00:00:00.000 Started by upstream project "autotest-per-patch" build number 130871
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.098 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.099 The recommended git tool is: git
00:00:00.099 using credential 00000000-0000-0000-0000-000000000002
00:00:00.101 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.159 Fetching changes from the remote Git repository
00:00:00.160 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.212 Using shallow fetch with depth 1
00:00:00.212 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.212 > git --version # timeout=10
00:00:00.256 > git --version # 'git version 2.39.2'
00:00:00.256 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.287 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.287 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.145 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.162 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.177 Checking out Revision f95f9907808933a1db7196e15e13478e0f322ee7 (FETCH_HEAD)
00:00:06.177 > git config core.sparsecheckout # timeout=10
00:00:06.189 > git read-tree -mu HEAD # timeout=10
00:00:06.207 > git checkout -f f95f9907808933a1db7196e15e13478e0f322ee7 # timeout=5
00:00:06.232 Commit message: "Revert "autotest-phy: replace deprecated label for nvmf-cvl""
00:00:06.232 > git rev-list --no-walk 67cd2f1639a8077ee9fc0f9259e068d0e5b67761 # timeout=10
00:00:06.401 [Pipeline] Start of Pipeline
00:00:06.411 [Pipeline] library
00:00:06.412 Loading library shm_lib@master
00:00:06.412 Library shm_lib@master is cached. Copying from home.
00:00:06.424 [Pipeline] node
00:00:06.434 Running on GP19 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.435 [Pipeline] {
00:00:06.443 [Pipeline] catchError
00:00:06.444 [Pipeline] {
00:00:06.455 [Pipeline] wrap
00:00:06.463 [Pipeline] {
00:00:06.467 [Pipeline] stage
00:00:06.469 [Pipeline] { (Prologue)
00:00:06.760 [Pipeline] sh
00:00:07.043 + logger -p user.info -t JENKINS-CI
00:00:07.057 [Pipeline] echo
00:00:07.058 Node: GP19
00:00:07.063 [Pipeline] sh
00:00:07.360 [Pipeline] setCustomBuildProperty
00:00:07.367 [Pipeline] echo
00:00:07.368 Cleanup processes
00:00:07.372 [Pipeline] sh
00:00:07.650 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.650 1610210 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.663 [Pipeline] sh
00:00:07.947 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.947 ++ grep -v 'sudo pgrep'
00:00:07.947 ++ awk '{print $1}'
00:00:07.947 + sudo kill -9
00:00:07.947 + true
00:00:07.959 [Pipeline] cleanWs
00:00:07.968 [WS-CLEANUP] Deleting project workspace...
00:00:07.968 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.974 [WS-CLEANUP] done
00:00:07.976 [Pipeline] setCustomBuildProperty
00:00:07.986 [Pipeline] sh
00:00:08.265 + sudo git config --global --replace-all safe.directory '*'
00:00:08.416 [Pipeline] httpRequest
00:00:08.741 [Pipeline] echo
00:00:08.742 Sorcerer 10.211.164.101 is alive
00:00:08.747 [Pipeline] retry
00:00:08.748 [Pipeline] {
00:00:08.759 [Pipeline] httpRequest
00:00:08.764 HttpMethod: GET
00:00:08.764 URL: http://10.211.164.101/packages/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:08.765 Sending request to url: http://10.211.164.101/packages/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:08.786 Response Code: HTTP/1.1 200 OK
00:00:08.786 Success: Status code 200 is in the accepted range: 200,404
00:00:08.786 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:14.717 [Pipeline] }
00:00:14.732 [Pipeline] // retry
00:00:14.738 [Pipeline] sh
00:00:15.022 + tar --no-same-owner -xf jbp_f95f9907808933a1db7196e15e13478e0f322ee7.tar.gz
00:00:15.038 [Pipeline] httpRequest
00:00:16.787 [Pipeline] echo
00:00:16.788 Sorcerer 10.211.164.101 is alive
00:00:16.794 [Pipeline] retry
00:00:16.795 [Pipeline] {
00:00:16.805 [Pipeline] httpRequest
00:00:16.809 HttpMethod: GET
00:00:16.809 URL: http://10.211.164.101/packages/spdk_d16db39ee342e0479057d263a9944f38a2a1af94.tar.gz
00:00:16.811 Sending request to url: http://10.211.164.101/packages/spdk_d16db39ee342e0479057d263a9944f38a2a1af94.tar.gz
00:00:16.834 Response Code: HTTP/1.1 200 OK
00:00:16.835 Success: Status code 200 is in the accepted range: 200,404
00:00:16.835 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d16db39ee342e0479057d263a9944f38a2a1af94.tar.gz
00:04:52.900 [Pipeline] }
00:04:52.916 [Pipeline] // retry
00:04:52.924 [Pipeline] sh
00:04:53.215 + tar --no-same-owner -xf spdk_d16db39ee342e0479057d263a9944f38a2a1af94.tar.gz
00:04:56.525 [Pipeline] sh
00:04:56.806 + git -C spdk log --oneline -n5
00:04:56.806 d16db39ee bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create()
00:04:56.806 32fb30b70 bdev/nvme: changed default config to multipath
00:04:56.806 397c5fc31 bdev/nvme: ctrl config consistency check
00:04:56.806 3950cd1bb bdev/nvme: Change spdk_bdev_reset() to succeed if at least one nvme_ctrlr is reconnected
00:04:56.806 f9141d271 test/blob: Add BLOCKLEN macro in blob_ut
00:04:56.818 [Pipeline] }
00:04:56.831 [Pipeline] // stage
00:04:56.848 [Pipeline] stage
00:04:56.855 [Pipeline] { (Prepare)
00:04:56.889 [Pipeline] writeFile
00:04:56.912 [Pipeline] sh
00:04:57.191 + logger -p user.info -t JENKINS-CI
00:04:57.205 [Pipeline] sh
00:04:57.491 + logger -p user.info -t JENKINS-CI
00:04:57.503 [Pipeline] sh
00:04:57.790 + cat autorun-spdk.conf
00:04:57.790 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:57.790 SPDK_TEST_NVMF=1
00:04:57.790 SPDK_TEST_NVME_CLI=1
00:04:57.790 SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:57.790 SPDK_TEST_NVMF_NICS=e810
00:04:57.790 SPDK_TEST_VFIOUSER=1
00:04:57.790 SPDK_RUN_UBSAN=1
00:04:57.790 NET_TYPE=phy
00:04:57.799 RUN_NIGHTLY=0
00:04:57.803 [Pipeline] readFile
00:04:57.826 [Pipeline] withEnv
00:04:57.828 [Pipeline] {
00:04:57.837 [Pipeline] sh
00:04:58.120 + set -ex
00:04:58.120 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:04:58.120 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:58.120 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:58.120 ++ SPDK_TEST_NVMF=1
00:04:58.120 ++ SPDK_TEST_NVME_CLI=1
00:04:58.120 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:58.120 ++ SPDK_TEST_NVMF_NICS=e810
00:04:58.120 ++ SPDK_TEST_VFIOUSER=1
00:04:58.120 ++ SPDK_RUN_UBSAN=1
00:04:58.120 ++ NET_TYPE=phy
00:04:58.120 ++ RUN_NIGHTLY=0
00:04:58.120 + case $SPDK_TEST_NVMF_NICS in
00:04:58.120 + DRIVERS=ice
00:04:58.120 + [[ tcp == \r\d\m\a ]]
00:04:58.120 + [[ -n ice ]]
00:04:58.120 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:04:58.120 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:04:58.120 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:04:58.120 rmmod: ERROR: Module i40iw is not currently loaded
00:04:58.120 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:04:58.120 + true
00:04:58.120 + for D in $DRIVERS
00:04:58.120 + sudo modprobe ice
00:04:58.120 + exit 0
00:04:58.129 [Pipeline] }
00:04:58.141 [Pipeline] // withEnv
00:04:58.146 [Pipeline] }
00:04:58.157 [Pipeline] // stage
00:04:58.165 [Pipeline] catchError
00:04:58.165 [Pipeline] {
00:04:58.176 [Pipeline] timeout
00:04:58.176 Timeout set to expire in 1 hr 0 min
00:04:58.177 [Pipeline] {
00:04:58.189 [Pipeline] stage
00:04:58.190 [Pipeline] { (Tests)
00:04:58.202 [Pipeline] sh
00:04:58.489 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:58.489 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:58.489 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:58.489 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:04:58.489 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:58.489 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:04:58.489 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:04:58.489 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:04:58.489 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:04:58.489 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:04:58.489 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:04:58.489 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:04:58.489 + source /etc/os-release
00:04:58.489 ++ NAME='Fedora Linux'
00:04:58.489 ++ VERSION='39 (Cloud Edition)'
00:04:58.489 ++ ID=fedora
00:04:58.489 ++ VERSION_ID=39
00:04:58.489 ++ VERSION_CODENAME=
00:04:58.489 ++ PLATFORM_ID=platform:f39
00:04:58.489 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:04:58.489 ++ ANSI_COLOR='0;38;2;60;110;180'
00:04:58.489 ++ LOGO=fedora-logo-icon
00:04:58.489 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:04:58.489 ++ HOME_URL=https://fedoraproject.org/
00:04:58.489 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:04:58.489 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:04:58.489 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:04:58.489 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:04:58.489 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:04:58.489 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:04:58.489 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:04:58.489 ++ SUPPORT_END=2024-11-12
00:04:58.489 ++ VARIANT='Cloud Edition'
00:04:58.489 ++ VARIANT_ID=cloud
00:04:58.489 + uname -a
00:04:58.489 Linux spdk-gp-19 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:04:58.489 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:04:59.426 Hugepages
00:04:59.426 node hugesize free / total
00:04:59.426 node0 1048576kB 0 / 0
00:04:59.426 node0 2048kB 0 / 0
00:04:59.426 node1 1048576kB 0 / 0
00:04:59.426 node1 2048kB 0 / 0
00:04:59.426
00:04:59.426 Type BDF Vendor Device NUMA Driver Device Block devices
00:04:59.426 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:04:59.426 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:04:59.426 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:04:59.426 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:04:59.426 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:04:59.426 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:04:59.426 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:04:59.426 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:04:59.685 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:04:59.685 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:04:59.685 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:04:59.685 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:04:59.685 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:04:59.685 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:04:59.685 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:04:59.685 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:04:59.685 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:04:59.685 + rm -f /tmp/spdk-ld-path
00:04:59.685 + source autorun-spdk.conf
00:04:59.685 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:04:59.685 ++ SPDK_TEST_NVMF=1
00:04:59.685 ++ SPDK_TEST_NVME_CLI=1
00:04:59.685 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:59.685 ++ SPDK_TEST_NVMF_NICS=e810
00:04:59.685 ++ SPDK_TEST_VFIOUSER=1
00:04:59.685 ++ SPDK_RUN_UBSAN=1
00:04:59.685 ++ NET_TYPE=phy
00:04:59.685 ++ RUN_NIGHTLY=0
00:04:59.685 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:04:59.685 + [[ -n '' ]]
00:04:59.685 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:04:59.685 + for M in /var/spdk/build-*-manifest.txt
00:04:59.685 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:04:59.685 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:59.685 + for M in /var/spdk/build-*-manifest.txt
00:04:59.685 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:04:59.685 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:59.685 + for M in /var/spdk/build-*-manifest.txt
00:04:59.685 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:04:59.685 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:04:59.685 ++ uname
00:04:59.685 + [[ Linux == \L\i\n\u\x ]]
00:04:59.685 + sudo dmesg -T
00:04:59.685 + sudo dmesg --clear
00:04:59.685 + dmesg_pid=1612138
00:04:59.685 + [[ Fedora Linux == FreeBSD ]]
00:04:59.685 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:59.685 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:04:59.685 + sudo dmesg -Tw
00:04:59.685 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:04:59.685 + [[ -x /usr/src/fio-static/fio ]]
00:04:59.685 + export FIO_BIN=/usr/src/fio-static/fio
00:04:59.685 + FIO_BIN=/usr/src/fio-static/fio
00:04:59.685 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:04:59.685 + [[ ! -v VFIO_QEMU_BIN ]]
00:04:59.685 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:04:59.685 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:59.685 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:04:59.685 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:04:59.685 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:59.685 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:04:59.685 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:04:59.685 Test configuration:
00:04:59.685 SPDK_RUN_FUNCTIONAL_TEST=1
00:04:59.685 SPDK_TEST_NVMF=1
00:04:59.685 SPDK_TEST_NVME_CLI=1
00:04:59.685 SPDK_TEST_NVMF_TRANSPORT=tcp
00:04:59.685 SPDK_TEST_NVMF_NICS=e810
00:04:59.685 SPDK_TEST_VFIOUSER=1
00:04:59.685 SPDK_RUN_UBSAN=1
00:04:59.685 NET_TYPE=phy
00:04:59.685 RUN_NIGHTLY=0
13:15:41 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
13:15:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
13:15:41 -- scripts/common.sh@15 -- $ shopt -s extglob
13:15:41 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
13:15:41 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
13:15:41 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
13:15:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:15:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:15:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:15:41 -- paths/export.sh@5 -- $ export PATH
13:15:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:15:41 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
13:15:41 -- common/autobuild_common.sh@486 -- $ date +%s
13:15:41 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728299741.XXXXXX
13:15:41 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728299741.lJLiKQ
13:15:41 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
13:15:41 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
13:15:41 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
13:15:41 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
13:15:41 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
13:15:41 -- common/autobuild_common.sh@502 -- $ get_config_params
13:15:41 -- common/autotest_common.sh@407 -- $ xtrace_disable
13:15:41 -- common/autotest_common.sh@10 -- $ set +x
13:15:41 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
13:15:41 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
13:15:41 -- pm/common@17 -- $ local monitor
13:15:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:15:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:15:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:15:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
13:15:41 -- pm/common@21 -- $ date +%s
13:15:41 -- pm/common@21 -- $ date +%s
13:15:41 -- pm/common@25 -- $ sleep 1
13:15:41 -- pm/common@21 -- $ date +%s
13:15:41 -- pm/common@21 -- $ date +%s
13:15:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728299741
13:15:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728299741
13:15:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728299741
13:15:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1728299741
00:04:59.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728299741_collect-vmstat.pm.log
00:04:59.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728299741_collect-cpu-temp.pm.log
00:04:59.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728299741_collect-cpu-load.pm.log
00:04:59.945 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1728299741_collect-bmc-pm.bmc.pm.log
00:05:00.885 13:15:42 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:05:00.885 13:15:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:05:00.885 13:15:42 -- spdk/autobuild.sh@12 -- $ umask 022
00:05:00.885 13:15:42 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:00.885 13:15:42 -- spdk/autobuild.sh@16 -- $ date -u
00:05:00.885 Mon Oct 7 11:15:42 AM UTC 2024
00:05:00.885 13:15:42 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:05:00.885 v25.01-pre-38-gd16db39ee
00:05:00.885 13:15:42 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:05:00.885 13:15:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:05:00.885 13:15:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:05:00.885 13:15:42 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:05:00.885 13:15:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:05:00.885 13:15:42 -- common/autotest_common.sh@10 -- $ set +x
00:05:00.885 ************************************
00:05:00.885 START TEST ubsan
00:05:00.885 ************************************
00:05:00.885 13:15:42 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:05:00.885 using ubsan
00:05:00.885
00:05:00.885 real 0m0.000s
00:05:00.885 user 0m0.000s
00:05:00.885 sys 0m0.000s
00:05:00.885 13:15:42 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:05:00.885 13:15:42 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:05:00.885 ************************************
00:05:00.885 END TEST ubsan
00:05:00.885 ************************************
00:05:00.885 13:15:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:05:00.885 13:15:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:05:00.885 13:15:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:05:00.885 13:15:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:05:00.885 13:15:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:05:00.885 13:15:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:05:00.885 13:15:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:05:00.885 13:15:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:00.885 13:15:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:05:00.885 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:05:00.885 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:01.142 Using 'verbs' RDMA provider
00:05:11.803 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:05:21.791 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:05:22.050 Creating mk/config.mk...done.
00:05:22.050 Creating mk/cc.flags.mk...done.
00:05:22.050 Type 'make' to build.
00:05:22.050 13:16:03 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
13:16:03 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
13:16:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable
13:16:03 -- common/autotest_common.sh@10 -- $ set +x
00:05:22.050 ************************************
00:05:22.050 START TEST make
00:05:22.050 ************************************
00:05:22.050 13:16:03 make -- common/autotest_common.sh@1125 -- $ make -j48
00:05:22.311 make[1]: Nothing to be done for 'all'.
00:05:24.243 The Meson build system
00:05:24.243 Version: 1.5.0
00:05:24.243 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:05:24.243 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:24.243 Build type: native build
00:05:24.243 Project name: libvfio-user
00:05:24.243 Project version: 0.0.1
00:05:24.243 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:24.243 C linker for the host machine: cc ld.bfd 2.40-14
00:05:24.243 Host machine cpu family: x86_64
00:05:24.243 Host machine cpu: x86_64
00:05:24.243 Run-time dependency threads found: YES
00:05:24.243 Library dl found: YES
00:05:24.243 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:24.243 Run-time dependency json-c found: YES 0.17
00:05:24.243 Run-time dependency cmocka found: YES 1.1.7
00:05:24.243 Program pytest-3 found: NO
00:05:24.243 Program flake8 found: NO
00:05:24.243 Program misspell-fixer found: NO
00:05:24.243 Program restructuredtext-lint found: NO
00:05:24.243 Program valgrind found: YES (/usr/bin/valgrind)
00:05:24.244 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:24.244 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:24.244 Compiler for C supports arguments -Wwrite-strings: YES
00:05:24.244 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:24.244 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:05:24.244 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:05:24.244 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:05:24.244 Build targets in project: 8
00:05:24.244 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:05:24.244 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:05:24.244
00:05:24.244 libvfio-user 0.0.1
00:05:24.244
00:05:24.244 User defined options
00:05:24.244 buildtype : debug
00:05:24.244 default_library: shared
00:05:24.244 libdir : /usr/local/lib
00:05:24.244
00:05:24.244 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:24.816 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:25.080 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:05:25.080 [2/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:05:25.080 [3/37] Compiling C object samples/lspci.p/lspci.c.o
00:05:25.080 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:05:25.080 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:05:25.080 [6/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:05:25.080 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:05:25.343 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:05:25.343 [9/37] Compiling C object test/unit_tests.p/mocks.c.o
00:05:25.343 [10/37] Compiling C object samples/null.p/null.c.o
00:05:25.343 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:05:25.343 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:05:25.343 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:05:25.343 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:05:25.343 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:05:25.343 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:05:25.343 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:05:25.343 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:05:25.343 [19/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:05:25.343 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:05:25.343 [21/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:05:25.343 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:05:25.343 [23/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:05:25.343 [24/37] Compiling C object samples/server.p/server.c.o
00:05:25.343 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:05:25.343 [26/37] Compiling C object samples/client.p/client.c.o
00:05:25.343 [27/37] Linking target samples/client
00:05:25.604 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:05:25.604 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:05:25.604 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:05:25.604 [31/37] Linking target test/unit_tests
00:05:25.866 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:05:25.866 [33/37] Linking target samples/lspci
00:05:25.866 [34/37] Linking target samples/server
00:05:25.866 [35/37] Linking target samples/gpio-pci-idio-16
00:05:25.866 [36/37] Linking target samples/shadow_ioeventfd_server
00:05:25.866 [37/37] Linking target samples/null
00:05:25.866 INFO: autodetecting backend as ninja
00:05:25.866 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:25.867 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:05:26.809 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:05:26.809 ninja: no work to do.
00:05:32.078 The Meson build system
00:05:32.078 Version: 1.5.0
00:05:32.078 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:05:32.078 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:05:32.078 Build type: native build
00:05:32.078 Program cat found: YES (/usr/bin/cat)
00:05:32.078 Project name: DPDK
00:05:32.078 Project version: 24.03.0
00:05:32.078 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:05:32.078 C linker for the host machine: cc ld.bfd 2.40-14
00:05:32.078 Host machine cpu family: x86_64
00:05:32.078 Host machine cpu: x86_64
00:05:32.078 Message: ## Building in Developer Mode ##
00:05:32.078 Program pkg-config found: YES (/usr/bin/pkg-config)
00:05:32.078 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:05:32.078 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:05:32.078 Program python3 found: YES (/usr/bin/python3)
00:05:32.078 Program cat found: YES (/usr/bin/cat)
00:05:32.078 Compiler for C supports arguments -march=native: YES
00:05:32.078 Checking for size of "void *" : 8
00:05:32.078 Checking for size of "void *" : 8 (cached)
00:05:32.078 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:05:32.078 Library m found: YES
00:05:32.078 Library numa found: YES
00:05:32.078 Has header "numaif.h" : YES
00:05:32.078 Library fdt found: NO
00:05:32.078 Library execinfo found: NO
00:05:32.078 Has header "execinfo.h" : YES
00:05:32.078 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:05:32.078 Run-time dependency libarchive found: NO (tried pkgconfig)
00:05:32.078 Run-time dependency libbsd found: NO (tried pkgconfig)
00:05:32.078 Run-time dependency jansson found: NO (tried pkgconfig)
00:05:32.078 Run-time dependency openssl found: YES 3.1.1
00:05:32.078 Run-time dependency libpcap found: YES 1.10.4
00:05:32.078 Has header "pcap.h" with dependency libpcap: YES
00:05:32.078 Compiler for C supports arguments -Wcast-qual: YES
00:05:32.078 Compiler for C supports arguments -Wdeprecated: YES
00:05:32.078 Compiler for C supports arguments -Wformat: YES
00:05:32.078 Compiler for C supports arguments -Wformat-nonliteral: NO
00:05:32.079 Compiler for C supports arguments -Wformat-security: NO
00:05:32.079 Compiler for C supports arguments -Wmissing-declarations: YES
00:05:32.079 Compiler for C supports arguments -Wmissing-prototypes: YES
00:05:32.079 Compiler for C supports arguments -Wnested-externs: YES
00:05:32.079 Compiler for C supports arguments -Wold-style-definition: YES
00:05:32.079 Compiler for C supports arguments -Wpointer-arith: YES
00:05:32.079 Compiler for C supports arguments -Wsign-compare: YES
00:05:32.079 Compiler for C supports arguments -Wstrict-prototypes: YES
00:05:32.079 Compiler for C supports arguments -Wundef: YES
00:05:32.079 Compiler for C supports arguments -Wwrite-strings: YES
00:05:32.079 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:05:32.079 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:05:32.079 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:05:32.079 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:05:32.079 Program objdump found: YES (/usr/bin/objdump)
00:05:32.079 Compiler for C supports arguments -mavx512f: YES
00:05:32.079 Checking if "AVX512 checking" compiles: YES
00:05:32.079 Fetching value of define "__SSE4_2__" : 1
00:05:32.079 Fetching value of define "__AES__" : 1
00:05:32.079 Fetching value of define "__AVX__" : 1
00:05:32.079 Fetching value of define "__AVX2__" : (undefined)
00:05:32.079 Fetching value of define "__AVX512BW__" : (undefined)
00:05:32.079 Fetching value of define "__AVX512CD__" : (undefined)
00:05:32.079 Fetching value of define "__AVX512DQ__" : (undefined)
00:05:32.079 Fetching value of define "__AVX512F__" : (undefined)
00:05:32.079 Fetching value of define "__AVX512VL__" : (undefined)
00:05:32.079 Fetching value of define "__PCLMUL__" : 1
00:05:32.079 Fetching value of define "__RDRND__" : 1
00:05:32.079 Fetching value of define "__RDSEED__" : (undefined)
00:05:32.079 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:05:32.079 Fetching value of define "__znver1__" : (undefined)
00:05:32.079 Fetching value of define "__znver2__" : (undefined)
00:05:32.079 Fetching value of define "__znver3__" : (undefined)
00:05:32.079 Fetching value of define "__znver4__" : (undefined)
00:05:32.079 Compiler for C supports arguments -Wno-format-truncation: YES
00:05:32.079 Message: lib/log: Defining dependency "log"
00:05:32.079 Message: lib/kvargs: Defining dependency "kvargs"
00:05:32.079 Message: lib/telemetry: Defining dependency "telemetry"
00:05:32.079 Checking for function "getentropy" : NO
00:05:32.079 Message: lib/eal: Defining dependency "eal"
00:05:32.079 Message: lib/ring: Defining dependency "ring"
00:05:32.079 Message: lib/rcu: Defining dependency "rcu"
00:05:32.079 Message: lib/mempool: Defining dependency "mempool"
00:05:32.079 Message: lib/mbuf: Defining dependency "mbuf"
00:05:32.079 Fetching value of define "__PCLMUL__" : 1 (cached)
00:05:32.079 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:05:32.079 Compiler for C supports arguments -mpclmul: YES
00:05:32.079 Compiler for C supports arguments -maes: YES
00:05:32.079 Compiler for C supports arguments -mavx512f: YES (cached)
00:05:32.079 Compiler for C supports arguments -mavx512bw: YES
00:05:32.079 Compiler for C supports arguments -mavx512dq: YES
00:05:32.079 Compiler for C supports arguments -mavx512vl: YES
00:05:32.079 Compiler for C supports arguments -mvpclmulqdq: YES
00:05:32.079 Compiler for C supports arguments -mavx2: YES
00:05:32.079 Compiler for C supports arguments -mavx: YES
00:05:32.079 Message: lib/net: Defining dependency "net"
00:05:32.079 Message: lib/meter: Defining dependency "meter"
00:05:32.079 Message: lib/ethdev: Defining dependency "ethdev"
00:05:32.079 Message: lib/pci: Defining dependency "pci"
00:05:32.079 Message: lib/cmdline: Defining dependency "cmdline"
00:05:32.079 Message: lib/hash: Defining dependency "hash"
00:05:32.079 Message: lib/timer: Defining dependency "timer"
00:05:32.079 Message: lib/compressdev: Defining dependency "compressdev"
00:05:32.079 Message: lib/cryptodev: Defining dependency "cryptodev"
00:05:32.079 Message: lib/dmadev: Defining dependency "dmadev"
00:05:32.079 Compiler for C supports arguments -Wno-cast-qual: YES
00:05:32.079 Message: lib/power: Defining dependency "power"
00:05:32.079 Message: lib/reorder: Defining dependency "reorder"
00:05:32.079 Message: lib/security: Defining dependency "security"
00:05:32.079 Has header "linux/userfaultfd.h" : YES
00:05:32.079 Has header "linux/vduse.h" : YES
00:05:32.079 Message: lib/vhost: Defining dependency "vhost"
00:05:32.079 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:05:32.079 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:05:32.079 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:05:32.079 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:05:32.079 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:05:32.079 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:05:32.079 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:05:32.079 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:05:32.079 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:05:32.079 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:05:32.079 Program doxygen found: YES (/usr/local/bin/doxygen)
00:05:32.079 Configuring doxy-api-html.conf using configuration
00:05:32.079 Configuring doxy-api-man.conf using configuration
00:05:32.079 Program mandb found: YES (/usr/bin/mandb)
00:05:32.079 Program sphinx-build found: NO
00:05:32.079 Configuring rte_build_config.h using configuration
00:05:32.079 Message:
00:05:32.079 =================
00:05:32.079 Applications Enabled
00:05:32.079 =================
00:05:32.079
00:05:32.079 apps:
00:05:32.079
00:05:32.079
00:05:32.079 Message:
00:05:32.079 =================
00:05:32.079 Libraries Enabled
00:05:32.079 =================
00:05:32.079
00:05:32.079 libs:
00:05:32.079 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:32.079 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:32.079 cryptodev, dmadev, power, reorder, security, vhost,
00:05:32.079
00:05:32.079 Message:
00:05:32.079 ===============
00:05:32.079 Drivers Enabled
00:05:32.079 ===============
00:05:32.079
00:05:32.079 common:
00:05:32.079
00:05:32.079 bus:
00:05:32.079 pci, vdev,
00:05:32.079 mempool:
00:05:32.079 ring,
00:05:32.079 dma:
00:05:32.079
00:05:32.079 net:
00:05:32.079
00:05:32.079 crypto:
00:05:32.079
00:05:32.079 compress:
00:05:32.079
00:05:32.079 vdpa:
00:05:32.079
00:05:32.079
00:05:32.079 Message:
00:05:32.079 =================
00:05:32.079 Content Skipped
00:05:32.079 =================
00:05:32.079
00:05:32.079 apps:
00:05:32.079 dumpcap: explicitly disabled via build config
00:05:32.079 graph: explicitly disabled via build config
00:05:32.079 pdump: explicitly disabled via build config
00:05:32.079 proc-info: explicitly disabled via build config
00:05:32.079 test-acl: explicitly disabled via build config
00:05:32.079 test-bbdev: explicitly disabled via build config
00:05:32.079 test-cmdline: explicitly disabled via build config
00:05:32.079 test-compress-perf: explicitly disabled via build config
00:05:32.079 test-crypto-perf: explicitly disabled via build config
00:05:32.079 test-dma-perf: explicitly disabled via build config
00:05:32.079 test-eventdev: explicitly disabled via build config
00:05:32.079 test-fib: explicitly disabled via build config
00:05:32.079 test-flow-perf: explicitly disabled via build config
00:05:32.079 test-gpudev: explicitly disabled via build config
00:05:32.079 test-mldev: explicitly disabled via build config
00:05:32.079 test-pipeline: explicitly disabled via build config
00:05:32.079 test-pmd: explicitly disabled via build config
00:05:32.079 test-regex: explicitly disabled via build config
00:05:32.079 test-sad: explicitly disabled via build config
00:05:32.079 test-security-perf: explicitly disabled via build config
00:05:32.079
00:05:32.079 libs:
00:05:32.079 argparse: explicitly disabled via build config
00:05:32.079 metrics: explicitly disabled via build config
00:05:32.079 acl: explicitly disabled via build config
00:05:32.079 bbdev: explicitly disabled via build config
00:05:32.079 bitratestats: explicitly disabled via build config
00:05:32.079 bpf: explicitly disabled via build config
00:05:32.079 cfgfile: explicitly disabled via build config
00:05:32.079 distributor: explicitly disabled via build config
00:05:32.079 efd: explicitly disabled via build config
00:05:32.079 eventdev: explicitly disabled via build config
00:05:32.079 dispatcher: explicitly disabled via build config
00:05:32.079 gpudev: explicitly disabled via build config
00:05:32.079 gro: explicitly disabled via build config
00:05:32.079 gso: explicitly disabled via build config
00:05:32.079 ip_frag: explicitly disabled via build config
00:05:32.079 jobstats: explicitly disabled via build config
00:05:32.079 latencystats: explicitly disabled via build config
00:05:32.079 lpm: explicitly disabled via build config
00:05:32.079 member: explicitly disabled via build config
00:05:32.079 pcapng: explicitly disabled via build config
00:05:32.079 rawdev: explicitly disabled via build config
00:05:32.079 regexdev: explicitly disabled via build config
00:05:32.079 mldev: explicitly disabled via build config
00:05:32.079 rib: explicitly disabled via build config
00:05:32.079 sched: explicitly disabled via build config
00:05:32.079 stack: explicitly disabled via build config
00:05:32.079 ipsec: explicitly disabled via build config
00:05:32.079 pdcp: explicitly disabled via build config
00:05:32.079 fib: explicitly disabled via build config
00:05:32.079 port: explicitly disabled via build config
00:05:32.079 pdump: explicitly disabled via build config
00:05:32.079 table: explicitly disabled via build config
00:05:32.079 pipeline: explicitly disabled via build config
00:05:32.079 graph: explicitly disabled via build config
00:05:32.079 node: explicitly disabled via build config
00:05:32.079
00:05:32.079 drivers:
00:05:32.079 common/cpt: not in enabled drivers build config
00:05:32.079 common/dpaax: not in enabled drivers build config
00:05:32.079 common/iavf: not in enabled drivers build config
00:05:32.079 common/idpf: not in enabled drivers build config
00:05:32.079 common/ionic: not in enabled drivers build config
00:05:32.079 common/mvep: not in enabled drivers build config
00:05:32.079 common/octeontx: not in enabled drivers build config
00:05:32.079 bus/auxiliary: not in enabled drivers build config
00:05:32.079 bus/cdx: not in enabled drivers build config
00:05:32.079 bus/dpaa: not in enabled drivers build config
00:05:32.079 bus/fslmc: not in enabled drivers build config
00:05:32.079 bus/ifpga: not in enabled drivers build config
00:05:32.079 bus/platform: not in enabled drivers build config
00:05:32.079 bus/uacce: not in enabled drivers build config
00:05:32.079 bus/vmbus: not in enabled drivers build config
00:05:32.080 common/cnxk: not in enabled drivers build config
00:05:32.080 common/mlx5: not in enabled drivers build config
00:05:32.080 common/nfp: not in enabled drivers build config
00:05:32.080 common/nitrox: not in enabled drivers build config
00:05:32.080 common/qat: not in enabled drivers build config
00:05:32.080 common/sfc_efx: not in enabled drivers build config
00:05:32.080 mempool/bucket: not in enabled drivers build config
00:05:32.080 mempool/cnxk: not in enabled drivers build config
00:05:32.080 mempool/dpaa: not in enabled drivers build config
00:05:32.080 mempool/dpaa2: not in enabled drivers build config
00:05:32.080 mempool/octeontx: not in enabled drivers build config
00:05:32.080 mempool/stack: not in enabled drivers build config
00:05:32.080 dma/cnxk: not in enabled drivers build config
00:05:32.080 dma/dpaa: not in enabled drivers build config
00:05:32.080 dma/dpaa2: not in enabled drivers build config
00:05:32.080 dma/hisilicon: not in enabled drivers build config
00:05:32.080 dma/idxd: not in enabled drivers build config
00:05:32.080 dma/ioat: not in enabled drivers build config
00:05:32.080 dma/skeleton: not in enabled drivers build config
00:05:32.080 net/af_packet: not in enabled drivers build config
00:05:32.080 net/af_xdp: not in enabled drivers build config
00:05:32.080 net/ark: not in enabled drivers build config
00:05:32.080 net/atlantic: not in enabled drivers build config
00:05:32.080 net/avp: not in enabled drivers build config
00:05:32.080 net/axgbe: not in enabled drivers build config
00:05:32.080 net/bnx2x: not in enabled drivers build config
00:05:32.080 net/bnxt: not in enabled drivers build config
00:05:32.080 net/bonding: not in enabled drivers build config
00:05:32.080 net/cnxk: not in enabled drivers build config
00:05:32.080 net/cpfl: not in enabled drivers build config
00:05:32.080 net/cxgbe: not in enabled drivers build config
00:05:32.080 net/dpaa: not in enabled drivers build config
00:05:32.080 net/dpaa2: not in enabled drivers build config
00:05:32.080 net/e1000: not in enabled drivers build config
00:05:32.080 net/ena: not in enabled drivers build config
00:05:32.080 net/enetc: not in enabled drivers build config
00:05:32.080 net/enetfec: not in enabled drivers build config
00:05:32.080 net/enic: not in enabled drivers build config
00:05:32.080 net/failsafe: not in enabled drivers build config
00:05:32.080 net/fm10k: not in enabled drivers build config
00:05:32.080 net/gve: not in enabled drivers build config
00:05:32.080 net/hinic: not in enabled drivers build config
00:05:32.080 net/hns3: not in enabled drivers build config
00:05:32.080 net/i40e: not in enabled drivers build config
00:05:32.080 net/iavf: not in enabled drivers build config
00:05:32.080 net/ice: not in enabled drivers build config
00:05:32.080 net/idpf: not in enabled drivers build config
00:05:32.080 net/igc: not in enabled drivers build config
00:05:32.080 net/ionic: not in enabled drivers build config
00:05:32.080 net/ipn3ke: not in enabled drivers build config
00:05:32.080 net/ixgbe: not in enabled drivers build config
00:05:32.080 net/mana: not in enabled drivers build config
00:05:32.080 net/memif: not in enabled drivers build config
00:05:32.080 net/mlx4: not in enabled drivers build config
00:05:32.080 net/mlx5: not in enabled drivers build config
00:05:32.080 net/mvneta: not in enabled drivers build config
00:05:32.080 net/mvpp2: not in enabled drivers build config
00:05:32.080 net/netvsc: not in enabled drivers build config
00:05:32.080 net/nfb: not in enabled drivers build config
00:05:32.080 net/nfp: not in enabled drivers build config
00:05:32.080 net/ngbe: not in enabled drivers build config
00:05:32.080 net/null: not in enabled drivers build config
00:05:32.080 net/octeontx: not in enabled drivers build config
00:05:32.080 net/octeon_ep: not in enabled drivers build config
00:05:32.080 net/pcap: not in enabled drivers build config
00:05:32.080 net/pfe: not in enabled drivers build config
00:05:32.080 net/qede: not in enabled drivers build config
00:05:32.080 net/ring: not in enabled drivers build config
00:05:32.080 net/sfc: not in enabled drivers build config
00:05:32.080 net/softnic: not in enabled drivers build config
00:05:32.080 net/tap: not in enabled drivers build config
00:05:32.080 net/thunderx: not in enabled drivers build config
00:05:32.080 net/txgbe: not in enabled drivers build config
00:05:32.080 net/vdev_netvsc: not in enabled drivers build config
00:05:32.080 net/vhost: not in enabled drivers build config
00:05:32.080 net/virtio: not in enabled drivers build config
00:05:32.080 net/vmxnet3: not in enabled drivers build config
00:05:32.080 raw/*: missing internal dependency, "rawdev"
00:05:32.080 crypto/armv8: not in enabled drivers build config
00:05:32.080 crypto/bcmfs: not in enabled drivers build config
00:05:32.080 crypto/caam_jr: not in enabled drivers build config
00:05:32.080 crypto/ccp: not in enabled drivers build config
00:05:32.080 crypto/cnxk: not in enabled drivers build config
00:05:32.080 crypto/dpaa_sec: not in enabled drivers build config
00:05:32.080 crypto/dpaa2_sec: not in enabled drivers build config
00:05:32.080 crypto/ipsec_mb: not in enabled drivers build config
00:05:32.080 crypto/mlx5: not in enabled drivers build config
00:05:32.080 crypto/mvsam: not in enabled drivers build config
00:05:32.080 crypto/nitrox: not in enabled drivers build config
00:05:32.080 crypto/null: not in enabled drivers build config
00:05:32.080 crypto/octeontx: not in enabled drivers build config
00:05:32.080 crypto/openssl: not in enabled drivers build config
00:05:32.080 crypto/scheduler: not in enabled drivers build config
00:05:32.080 crypto/uadk: not in enabled drivers build config
00:05:32.080 crypto/virtio: not in enabled drivers build config
00:05:32.080 compress/isal: not in enabled drivers build config
00:05:32.080 compress/mlx5: not in enabled drivers build config
00:05:32.080 compress/nitrox: not in enabled drivers build config
00:05:32.080 compress/octeontx: not in enabled drivers build config
00:05:32.080 compress/zlib: not in enabled drivers build config
00:05:32.080 regex/*: missing internal dependency, "regexdev"
00:05:32.080 ml/*: missing internal dependency, "mldev"
00:05:32.080 vdpa/ifc: not in enabled drivers build config
00:05:32.080 vdpa/mlx5: not in enabled drivers build config
00:05:32.080 vdpa/nfp: not in enabled drivers build config
00:05:32.080 vdpa/sfc: not in enabled drivers build config
00:05:32.080 event/*: missing internal dependency, "eventdev"
00:05:32.080 baseband/*: missing internal dependency, "bbdev"
00:05:32.080 gpu/*: missing internal dependency, "gpudev"
00:05:32.080
00:05:32.080
00:05:32.080 Build targets in project: 85
00:05:32.080
00:05:32.080 DPDK 24.03.0
00:05:32.080
00:05:32.080 User defined options
00:05:32.080 buildtype : debug
00:05:32.080 default_library : shared
00:05:32.080 libdir : lib
00:05:32.080 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:05:32.080 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:05:32.080 c_link_args :
00:05:32.080 cpu_instruction_set: native
00:05:32.080 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:05:32.080 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev
00:05:32.080 enable_docs : false
00:05:32.080 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:05:32.080 enable_kmods : false
00:05:32.080 max_lcores : 128
00:05:32.080 tests : false
00:05:32.080
00:05:32.080 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:32.080 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:05:32.080 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:05:32.080 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:05:32.080 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:05:32.080 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:05:32.080 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:05:32.342 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:32.342 [7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:32.342 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:05:32.342 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:05:32.342 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:05:32.342 [11/268] Linking static target lib/librte_kvargs.a
00:05:32.342 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:05:32.342 [13/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:05:32.342 [14/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:32.342 [15/268] Linking static target lib/librte_log.a
00:05:32.342 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:05:32.914 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:05:32.914 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:05:32.914 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:05:33.176 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:05:33.176 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:05:33.176 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:05:33.176 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:05:33.176 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:05:33.176 [25/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:05:33.176 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:05:33.176 [27/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:05:33.176 [28/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:05:33.176 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:05:33.176 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:05:33.176 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:05:33.176 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:05:33.176 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:05:33.176 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:05:33.176 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:05:33.176 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:05:33.176 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:05:33.176 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:05:33.176 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:05:33.176 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:05:33.176 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:05:33.176 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:05:33.176 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:05:33.176 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:05:33.176 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:05:33.176 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:05:33.176 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:05:33.176 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:05:33.176 [49/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:05:33.176 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:05:33.176 [51/268] Linking static target lib/librte_telemetry.a
00:05:33.176 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:05:33.176 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:05:33.176 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:05:33.176 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:05:33.176 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:05:33.176 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:05:33.436 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:05:33.436 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:05:33.436 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:05:33.436 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:05:33.436 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:05:33.436 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:05:33.437 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:05:33.437 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:05:33.699 [66/268] Linking target lib/librte_log.so.24.1
00:05:33.699 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:05:33.699 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:05:33.699 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:05:33.959 [70/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:05:33.959 [71/268] Linking static target lib/librte_pci.a
00:05:33.959 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:05:33.959 [73/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:05:33.959 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:05:33.959 [75/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:05:33.959 [76/268] Linking target lib/librte_kvargs.so.24.1
00:05:33.959 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:05:33.959 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:05:33.959 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:05:33.959 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:05:33.959 [81/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:05:33.959 [82/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:05:33.959 [83/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:05:33.959 [84/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:05:33.959 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:05:33.959 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:05:34.223 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:05:34.223 [88/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:05:34.223 [89/268] Linking static target lib/librte_ring.a
00:05:34.223 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:05:34.223 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:05:34.223 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:05:34.223 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:05:34.223 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:05:34.223 [95/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:05:34.223 [96/268] Linking static target lib/librte_meter.a
00:05:34.223 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:05:34.223 [98/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:05:34.223 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:05:34.223 [100/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:05:34.223 [101/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:05:34.223 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:05:34.223 [103/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:05:34.223 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:05:34.223 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:05:34.223 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:05:34.223 [107/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:05:34.223 [108/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:05:34.223 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:05:34.223 [110/268] Linking static target lib/librte_mempool.a
00:05:34.489 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:05:34.489 [112/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:05:34.489 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:05:34.489 [114/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:05:34.489 [115/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:05:34.489 [116/268] Linking static target lib/librte_rcu.a
00:05:34.489 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:05:34.489 [118/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:05:34.489 [119/268] Linking static target lib/librte_eal.a
00:05:34.489 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:05:34.489 [121/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:05:34.489 [122/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:05:34.489 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:05:34.489 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:05:34.489 [125/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:05:34.489 [126/268] Linking target lib/librte_telemetry.so.24.1
00:05:34.489 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:05:34.489 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:05:34.489 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:05:34.750 [130/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:05:34.750 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:05:34.750 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:05:34.750 [133/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:05:34.750 [134/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:05:34.750 [135/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:05:34.750 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:05:34.750 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:05:35.013 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:05:35.013 [139/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:05:35.013 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:05:35.013 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:05:35.013 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:05:35.013 [143/268] Linking static target lib/librte_cmdline.a
00:05:35.013 [144/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:05:35.013 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:05:35.013 [146/268] Linking static target lib/librte_net.a
00:05:35.013 [147/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:05:35.276 [148/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:05:35.276 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:05:35.276 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:05:35.276 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:05:35.276 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:05:35.276 [153/268] Linking static target lib/librte_timer.a
00:05:35.276 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:05:35.276 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:05:35.276 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:05:35.276 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:05:35.276 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:05:35.276 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:05:35.276 [160/268] Linking static target lib/librte_dmadev.a
00:05:35.535 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:05:35.535 [162/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:05:35.535 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:05:35.535 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:05:35.535 [165/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:05:35.535 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:05:35.535 [167/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:05:35.535 [168/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:05:35.535 [169/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:05:35.535 [170/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:05:35.535 [171/268] Linking static target lib/librte_compressdev.a
00:05:35.535 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:05:35.535 [173/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:05:35.535 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:05:35.535 [175/268] Linking static target lib/librte_power.a
00:05:35.794 [176/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:05:35.794 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:05:35.794 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:05:35.794 [179/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:05:35.794 [180/268] Linking static target lib/librte_hash.a
00:05:35.794 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:05:35.794 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:05:35.794 [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:05:35.794 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:05:35.794 [185/268] Linking static target lib/librte_reorder.a
00:05:35.794 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:05:35.794 [187/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:05:35.794 [188/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:05:35.794 [189/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:05:35.794 [190/268] Linking static target lib/librte_mbuf.a
00:05:35.794 [191/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:36.053 [192/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:05:36.053 [193/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:05:36.053 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:05:36.053 [195/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:05:36.053 [196/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:05:36.053 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:05:36.053 [198/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:05:36.053 [199/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:36.053 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:05:36.053 [201/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:05:36.053 [202/268] Linking static target lib/librte_security.a
00:05:36.053 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:05:36.053 [204/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:05:36.053 [205/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:05:36.053 [206/268] Linking static target drivers/librte_bus_vdev.a
00:05:36.053 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:05:36.311 [208/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:05:36.311 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:05:36.311 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:05:36.311 [211/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:05:36.311 [212/268] Linking static target drivers/librte_mempool_ring.a
00:05:36.311 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:05:36.311 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:05:36.312 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:05:36.312 [216/268] Linking static target drivers/librte_bus_pci.a
00:05:36.312 [217/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:05:36.312 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:05:36.312 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:05:36.570 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:05:36.570 [221/268] Generating lib/security.sym_chk with a custom command
(wrapped by meson to capture output) 00:05:36.570 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:36.570 [223/268] Linking static target lib/librte_ethdev.a 00:05:36.570 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:36.570 [225/268] Linking static target lib/librte_cryptodev.a 00:05:36.833 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:37.768 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:39.145 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:40.522 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.522 [230/268] Linking target lib/librte_eal.so.24.1 00:05:40.522 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:40.522 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:40.522 [233/268] Linking target lib/librte_ring.so.24.1 00:05:40.522 [234/268] Linking target lib/librte_pci.so.24.1 00:05:40.522 [235/268] Linking target lib/librte_timer.so.24.1 00:05:40.522 [236/268] Linking target lib/librte_meter.so.24.1 00:05:40.780 [237/268] Linking target lib/librte_dmadev.so.24.1 00:05:40.780 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:40.780 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:40.780 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:40.780 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:40.780 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:40.780 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:40.780 [244/268] Linking target 
lib/librte_rcu.so.24.1 00:05:40.780 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:40.780 [246/268] Linking target lib/librte_mempool.so.24.1 00:05:41.037 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:41.037 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:41.037 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:41.037 [250/268] Linking target lib/librte_mbuf.so.24.1 00:05:41.037 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:41.037 [252/268] Linking target lib/librte_reorder.so.24.1 00:05:41.037 [253/268] Linking target lib/librte_compressdev.so.24.1 00:05:41.037 [254/268] Linking target lib/librte_net.so.24.1 00:05:41.037 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:05:41.295 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:41.295 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:41.295 [258/268] Linking target lib/librte_cmdline.so.24.1 00:05:41.295 [259/268] Linking target lib/librte_hash.so.24.1 00:05:41.295 [260/268] Linking target lib/librte_security.so.24.1 00:05:41.295 [261/268] Linking target lib/librte_ethdev.so.24.1 00:05:41.554 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:41.554 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:41.554 [264/268] Linking target lib/librte_power.so.24.1 00:05:44.836 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:44.836 [266/268] Linking static target lib/librte_vhost.a 00:05:45.772 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.772 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:45.772 INFO: autodetecting backend as ninja 00:05:45.772 INFO: calculating 
backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:06:07.798 CC lib/log/log.o 00:06:07.798 CC lib/ut_mock/mock.o 00:06:07.798 CC lib/log/log_flags.o 00:06:07.798 CC lib/log/log_deprecated.o 00:06:07.798 CC lib/ut/ut.o 00:06:07.798 LIB libspdk_ut.a 00:06:07.798 LIB libspdk_log.a 00:06:07.798 LIB libspdk_ut_mock.a 00:06:07.798 SO libspdk_ut.so.2.0 00:06:07.798 SO libspdk_ut_mock.so.6.0 00:06:07.798 SO libspdk_log.so.7.0 00:06:07.798 SYMLINK libspdk_ut.so 00:06:07.798 SYMLINK libspdk_ut_mock.so 00:06:07.798 SYMLINK libspdk_log.so 00:06:07.798 CC lib/ioat/ioat.o 00:06:07.798 CC lib/dma/dma.o 00:06:07.798 CXX lib/trace_parser/trace.o 00:06:07.798 CC lib/util/base64.o 00:06:07.799 CC lib/util/bit_array.o 00:06:07.799 CC lib/util/cpuset.o 00:06:07.799 CC lib/util/crc16.o 00:06:07.799 CC lib/util/crc32.o 00:06:07.799 CC lib/util/crc32c.o 00:06:07.799 CC lib/util/crc32_ieee.o 00:06:07.799 CC lib/util/crc64.o 00:06:07.799 CC lib/util/dif.o 00:06:07.799 CC lib/util/fd.o 00:06:07.799 CC lib/util/fd_group.o 00:06:07.799 CC lib/util/file.o 00:06:07.799 CC lib/util/hexlify.o 00:06:07.799 CC lib/util/iov.o 00:06:07.799 CC lib/util/math.o 00:06:07.799 CC lib/util/net.o 00:06:07.799 CC lib/util/pipe.o 00:06:07.799 CC lib/util/strerror_tls.o 00:06:07.799 CC lib/util/uuid.o 00:06:07.799 CC lib/util/string.o 00:06:07.799 CC lib/util/xor.o 00:06:07.799 CC lib/util/zipf.o 00:06:07.799 CC lib/util/md5.o 00:06:07.799 CC lib/vfio_user/host/vfio_user_pci.o 00:06:07.799 CC lib/vfio_user/host/vfio_user.o 00:06:07.799 LIB libspdk_dma.a 00:06:07.799 SO libspdk_dma.so.5.0 00:06:07.799 SYMLINK libspdk_dma.so 00:06:07.799 LIB libspdk_ioat.a 00:06:07.799 SO libspdk_ioat.so.7.0 00:06:07.799 LIB libspdk_vfio_user.a 00:06:07.799 SYMLINK libspdk_ioat.so 00:06:07.799 SO libspdk_vfio_user.so.5.0 00:06:07.799 SYMLINK libspdk_vfio_user.so 00:06:07.799 LIB libspdk_util.a 00:06:07.799 SO libspdk_util.so.10.0 00:06:07.799 SYMLINK 
libspdk_util.so 00:06:07.799 CC lib/vmd/vmd.o 00:06:07.799 CC lib/idxd/idxd.o 00:06:07.799 CC lib/env_dpdk/env.o 00:06:07.799 CC lib/rdma_utils/rdma_utils.o 00:06:07.799 CC lib/json/json_parse.o 00:06:07.799 CC lib/conf/conf.o 00:06:07.799 CC lib/vmd/led.o 00:06:07.799 CC lib/idxd/idxd_user.o 00:06:07.799 CC lib/env_dpdk/memory.o 00:06:07.799 CC lib/rdma_provider/common.o 00:06:07.799 CC lib/json/json_util.o 00:06:07.799 CC lib/idxd/idxd_kernel.o 00:06:07.799 CC lib/env_dpdk/pci.o 00:06:07.799 CC lib/json/json_write.o 00:06:07.799 CC lib/env_dpdk/init.o 00:06:07.799 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:07.799 CC lib/env_dpdk/threads.o 00:06:07.799 CC lib/env_dpdk/pci_ioat.o 00:06:07.799 CC lib/env_dpdk/pci_virtio.o 00:06:07.799 CC lib/env_dpdk/pci_vmd.o 00:06:07.799 CC lib/env_dpdk/pci_idxd.o 00:06:07.799 CC lib/env_dpdk/pci_event.o 00:06:07.799 CC lib/env_dpdk/sigbus_handler.o 00:06:07.799 CC lib/env_dpdk/pci_dpdk.o 00:06:07.799 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:07.799 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:07.799 LIB libspdk_trace_parser.a 00:06:07.799 SO libspdk_trace_parser.so.6.0 00:06:07.799 SYMLINK libspdk_trace_parser.so 00:06:07.799 LIB libspdk_rdma_provider.a 00:06:07.799 SO libspdk_rdma_provider.so.6.0 00:06:07.799 LIB libspdk_conf.a 00:06:07.799 SO libspdk_conf.so.6.0 00:06:07.799 SYMLINK libspdk_rdma_provider.so 00:06:07.799 LIB libspdk_rdma_utils.a 00:06:07.799 SO libspdk_rdma_utils.so.1.0 00:06:07.799 SYMLINK libspdk_conf.so 00:06:07.799 LIB libspdk_json.a 00:06:07.799 SO libspdk_json.so.6.0 00:06:07.799 SYMLINK libspdk_rdma_utils.so 00:06:07.799 SYMLINK libspdk_json.so 00:06:08.057 CC lib/jsonrpc/jsonrpc_server.o 00:06:08.057 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:08.057 CC lib/jsonrpc/jsonrpc_client.o 00:06:08.057 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:08.057 LIB libspdk_idxd.a 00:06:08.057 SO libspdk_idxd.so.12.1 00:06:08.057 SYMLINK libspdk_idxd.so 00:06:08.315 LIB libspdk_vmd.a 00:06:08.315 SO libspdk_vmd.so.6.0 00:06:08.315 
LIB libspdk_jsonrpc.a 00:06:08.315 SYMLINK libspdk_vmd.so 00:06:08.315 SO libspdk_jsonrpc.so.6.0 00:06:08.315 SYMLINK libspdk_jsonrpc.so 00:06:08.573 CC lib/rpc/rpc.o 00:06:08.831 LIB libspdk_rpc.a 00:06:08.831 SO libspdk_rpc.so.6.0 00:06:08.831 SYMLINK libspdk_rpc.so 00:06:09.089 CC lib/keyring/keyring.o 00:06:09.089 CC lib/keyring/keyring_rpc.o 00:06:09.089 CC lib/trace/trace.o 00:06:09.089 CC lib/trace/trace_flags.o 00:06:09.089 CC lib/trace/trace_rpc.o 00:06:09.089 CC lib/notify/notify.o 00:06:09.089 CC lib/notify/notify_rpc.o 00:06:09.089 LIB libspdk_notify.a 00:06:09.089 SO libspdk_notify.so.6.0 00:06:09.348 LIB libspdk_keyring.a 00:06:09.348 SYMLINK libspdk_notify.so 00:06:09.348 SO libspdk_keyring.so.2.0 00:06:09.348 LIB libspdk_trace.a 00:06:09.348 SO libspdk_trace.so.11.0 00:06:09.348 SYMLINK libspdk_keyring.so 00:06:09.348 SYMLINK libspdk_trace.so 00:06:09.605 CC lib/thread/thread.o 00:06:09.605 CC lib/thread/iobuf.o 00:06:09.605 CC lib/sock/sock.o 00:06:09.605 CC lib/sock/sock_rpc.o 00:06:09.605 LIB libspdk_env_dpdk.a 00:06:09.605 SO libspdk_env_dpdk.so.15.0 00:06:09.605 SYMLINK libspdk_env_dpdk.so 00:06:09.863 LIB libspdk_sock.a 00:06:09.863 SO libspdk_sock.so.10.0 00:06:09.863 SYMLINK libspdk_sock.so 00:06:10.122 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:10.122 CC lib/nvme/nvme_ctrlr.o 00:06:10.122 CC lib/nvme/nvme_fabric.o 00:06:10.122 CC lib/nvme/nvme_ns_cmd.o 00:06:10.122 CC lib/nvme/nvme_ns.o 00:06:10.122 CC lib/nvme/nvme_pcie_common.o 00:06:10.122 CC lib/nvme/nvme_pcie.o 00:06:10.122 CC lib/nvme/nvme_qpair.o 00:06:10.122 CC lib/nvme/nvme.o 00:06:10.122 CC lib/nvme/nvme_quirks.o 00:06:10.122 CC lib/nvme/nvme_transport.o 00:06:10.122 CC lib/nvme/nvme_discovery.o 00:06:10.122 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:10.122 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:10.122 CC lib/nvme/nvme_tcp.o 00:06:10.122 CC lib/nvme/nvme_opal.o 00:06:10.122 CC lib/nvme/nvme_io_msg.o 00:06:10.122 CC lib/nvme/nvme_poll_group.o 00:06:10.122 CC lib/nvme/nvme_zns.o 00:06:10.122 
CC lib/nvme/nvme_stubs.o 00:06:10.122 CC lib/nvme/nvme_auth.o 00:06:10.122 CC lib/nvme/nvme_cuse.o 00:06:10.122 CC lib/nvme/nvme_vfio_user.o 00:06:10.122 CC lib/nvme/nvme_rdma.o 00:06:11.059 LIB libspdk_thread.a 00:06:11.059 SO libspdk_thread.so.10.2 00:06:11.317 SYMLINK libspdk_thread.so 00:06:11.317 CC lib/vfu_tgt/tgt_endpoint.o 00:06:11.317 CC lib/accel/accel.o 00:06:11.317 CC lib/init/json_config.o 00:06:11.317 CC lib/fsdev/fsdev.o 00:06:11.317 CC lib/accel/accel_rpc.o 00:06:11.317 CC lib/vfu_tgt/tgt_rpc.o 00:06:11.317 CC lib/fsdev/fsdev_io.o 00:06:11.317 CC lib/init/subsystem.o 00:06:11.317 CC lib/blob/blobstore.o 00:06:11.317 CC lib/accel/accel_sw.o 00:06:11.317 CC lib/virtio/virtio.o 00:06:11.317 CC lib/fsdev/fsdev_rpc.o 00:06:11.317 CC lib/virtio/virtio_vhost_user.o 00:06:11.317 CC lib/init/subsystem_rpc.o 00:06:11.317 CC lib/blob/request.o 00:06:11.317 CC lib/virtio/virtio_vfio_user.o 00:06:11.317 CC lib/blob/zeroes.o 00:06:11.317 CC lib/init/rpc.o 00:06:11.317 CC lib/virtio/virtio_pci.o 00:06:11.317 CC lib/blob/blob_bs_dev.o 00:06:11.575 LIB libspdk_init.a 00:06:11.575 SO libspdk_init.so.6.0 00:06:11.832 LIB libspdk_virtio.a 00:06:11.832 SYMLINK libspdk_init.so 00:06:11.832 SO libspdk_virtio.so.7.0 00:06:11.832 LIB libspdk_vfu_tgt.a 00:06:11.832 SO libspdk_vfu_tgt.so.3.0 00:06:11.832 SYMLINK libspdk_virtio.so 00:06:11.832 SYMLINK libspdk_vfu_tgt.so 00:06:11.832 CC lib/event/app.o 00:06:11.832 CC lib/event/reactor.o 00:06:11.832 CC lib/event/log_rpc.o 00:06:11.832 CC lib/event/app_rpc.o 00:06:11.832 CC lib/event/scheduler_static.o 00:06:12.089 LIB libspdk_fsdev.a 00:06:12.089 SO libspdk_fsdev.so.1.0 00:06:12.089 SYMLINK libspdk_fsdev.so 00:06:12.347 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:12.347 LIB libspdk_event.a 00:06:12.347 SO libspdk_event.so.15.0 00:06:12.347 SYMLINK libspdk_event.so 00:06:12.606 LIB libspdk_accel.a 00:06:12.606 SO libspdk_accel.so.16.0 00:06:12.606 SYMLINK libspdk_accel.so 00:06:12.606 LIB libspdk_nvme.a 00:06:12.864 SO 
libspdk_nvme.so.14.0 00:06:12.864 CC lib/bdev/bdev.o 00:06:12.864 CC lib/bdev/bdev_rpc.o 00:06:12.864 CC lib/bdev/bdev_zone.o 00:06:12.864 CC lib/bdev/part.o 00:06:12.864 CC lib/bdev/scsi_nvme.o 00:06:12.864 LIB libspdk_fuse_dispatcher.a 00:06:13.121 SO libspdk_fuse_dispatcher.so.1.0 00:06:13.121 SYMLINK libspdk_nvme.so 00:06:13.121 SYMLINK libspdk_fuse_dispatcher.so 00:06:14.496 LIB libspdk_blob.a 00:06:14.496 SO libspdk_blob.so.11.0 00:06:14.496 SYMLINK libspdk_blob.so 00:06:14.754 CC lib/lvol/lvol.o 00:06:14.754 CC lib/blobfs/blobfs.o 00:06:14.754 CC lib/blobfs/tree.o 00:06:15.320 LIB libspdk_bdev.a 00:06:15.320 SO libspdk_bdev.so.17.0 00:06:15.581 SYMLINK libspdk_bdev.so 00:06:15.581 LIB libspdk_blobfs.a 00:06:15.581 SO libspdk_blobfs.so.10.0 00:06:15.581 SYMLINK libspdk_blobfs.so 00:06:15.581 CC lib/nbd/nbd.o 00:06:15.581 CC lib/scsi/dev.o 00:06:15.581 CC lib/ublk/ublk.o 00:06:15.581 CC lib/nvmf/ctrlr.o 00:06:15.581 CC lib/scsi/lun.o 00:06:15.581 CC lib/nbd/nbd_rpc.o 00:06:15.581 CC lib/ublk/ublk_rpc.o 00:06:15.581 CC lib/nvmf/ctrlr_discovery.o 00:06:15.581 CC lib/ftl/ftl_core.o 00:06:15.581 CC lib/scsi/port.o 00:06:15.581 CC lib/nvmf/ctrlr_bdev.o 00:06:15.581 CC lib/ftl/ftl_init.o 00:06:15.581 CC lib/scsi/scsi.o 00:06:15.581 CC lib/ftl/ftl_layout.o 00:06:15.581 CC lib/nvmf/subsystem.o 00:06:15.581 CC lib/scsi/scsi_bdev.o 00:06:15.581 CC lib/ftl/ftl_debug.o 00:06:15.581 CC lib/nvmf/nvmf.o 00:06:15.581 CC lib/nvmf/nvmf_rpc.o 00:06:15.581 CC lib/scsi/scsi_rpc.o 00:06:15.581 CC lib/scsi/scsi_pr.o 00:06:15.581 CC lib/ftl/ftl_io.o 00:06:15.581 CC lib/nvmf/transport.o 00:06:15.581 CC lib/ftl/ftl_sb.o 00:06:15.581 CC lib/nvmf/tcp.o 00:06:15.582 CC lib/scsi/task.o 00:06:15.582 CC lib/ftl/ftl_l2p_flat.o 00:06:15.582 CC lib/ftl/ftl_l2p.o 00:06:15.582 CC lib/nvmf/stubs.o 00:06:15.582 CC lib/ftl/ftl_nv_cache.o 00:06:15.582 CC lib/nvmf/mdns_server.o 00:06:15.582 CC lib/ftl/ftl_band.o 00:06:15.582 CC lib/nvmf/vfio_user.o 00:06:15.582 CC lib/ftl/ftl_band_ops.o 00:06:15.582 
CC lib/ftl/ftl_writer.o 00:06:15.582 CC lib/nvmf/rdma.o 00:06:15.582 CC lib/nvmf/auth.o 00:06:15.582 CC lib/ftl/ftl_rq.o 00:06:15.582 CC lib/ftl/ftl_reloc.o 00:06:15.582 CC lib/ftl/ftl_l2p_cache.o 00:06:15.582 CC lib/ftl/ftl_p2l.o 00:06:15.582 CC lib/ftl/ftl_p2l_log.o 00:06:15.582 LIB libspdk_lvol.a 00:06:15.582 CC lib/ftl/mngt/ftl_mngt.o 00:06:15.582 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:15.582 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:15.582 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:15.582 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:15.582 SO libspdk_lvol.so.10.0 00:06:15.844 SYMLINK libspdk_lvol.so 00:06:15.844 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:16.112 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:16.112 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:16.112 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:16.112 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:16.112 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:16.112 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:16.112 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:16.112 CC lib/ftl/utils/ftl_conf.o 00:06:16.112 CC lib/ftl/utils/ftl_md.o 00:06:16.112 CC lib/ftl/utils/ftl_mempool.o 00:06:16.112 CC lib/ftl/utils/ftl_bitmap.o 00:06:16.112 CC lib/ftl/utils/ftl_property.o 00:06:16.112 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:16.112 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:16.112 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:16.112 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:16.112 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:16.112 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:16.371 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:16.371 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:16.371 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:16.371 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:16.371 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:16.371 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:16.371 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:16.371 CC lib/ftl/base/ftl_base_dev.o 00:06:16.371 CC lib/ftl/base/ftl_base_bdev.o 00:06:16.371 CC lib/ftl/ftl_trace.o 00:06:16.628 LIB libspdk_nbd.a 00:06:16.628 SO 
libspdk_nbd.so.7.0 00:06:16.629 LIB libspdk_scsi.a 00:06:16.629 SYMLINK libspdk_nbd.so 00:06:16.629 SO libspdk_scsi.so.9.0 00:06:16.629 SYMLINK libspdk_scsi.so 00:06:16.885 LIB libspdk_ublk.a 00:06:16.885 SO libspdk_ublk.so.3.0 00:06:16.885 CC lib/iscsi/conn.o 00:06:16.885 CC lib/vhost/vhost.o 00:06:16.885 CC lib/iscsi/init_grp.o 00:06:16.885 CC lib/vhost/vhost_rpc.o 00:06:16.885 CC lib/iscsi/iscsi.o 00:06:16.885 CC lib/vhost/vhost_scsi.o 00:06:16.885 CC lib/iscsi/param.o 00:06:16.885 CC lib/vhost/vhost_blk.o 00:06:16.885 CC lib/vhost/rte_vhost_user.o 00:06:16.885 CC lib/iscsi/tgt_node.o 00:06:16.885 CC lib/iscsi/portal_grp.o 00:06:16.885 CC lib/iscsi/iscsi_subsystem.o 00:06:16.885 CC lib/iscsi/iscsi_rpc.o 00:06:16.885 CC lib/iscsi/task.o 00:06:16.885 SYMLINK libspdk_ublk.so 00:06:17.143 LIB libspdk_ftl.a 00:06:17.401 SO libspdk_ftl.so.9.0 00:06:17.659 SYMLINK libspdk_ftl.so 00:06:18.225 LIB libspdk_vhost.a 00:06:18.225 SO libspdk_vhost.so.8.0 00:06:18.225 LIB libspdk_nvmf.a 00:06:18.225 SYMLINK libspdk_vhost.so 00:06:18.225 SO libspdk_nvmf.so.19.0 00:06:18.483 LIB libspdk_iscsi.a 00:06:18.483 SO libspdk_iscsi.so.8.0 00:06:18.483 SYMLINK libspdk_nvmf.so 00:06:18.483 SYMLINK libspdk_iscsi.so 00:06:18.741 CC module/env_dpdk/env_dpdk_rpc.o 00:06:18.741 CC module/vfu_device/vfu_virtio.o 00:06:18.741 CC module/vfu_device/vfu_virtio_blk.o 00:06:18.741 CC module/vfu_device/vfu_virtio_scsi.o 00:06:18.741 CC module/vfu_device/vfu_virtio_rpc.o 00:06:18.741 CC module/vfu_device/vfu_virtio_fs.o 00:06:18.999 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:18.999 CC module/keyring/file/keyring.o 00:06:18.999 CC module/keyring/file/keyring_rpc.o 00:06:18.999 CC module/accel/ioat/accel_ioat.o 00:06:18.999 CC module/sock/posix/posix.o 00:06:18.999 CC module/accel/ioat/accel_ioat_rpc.o 00:06:18.999 CC module/keyring/linux/keyring.o 00:06:18.999 CC module/fsdev/aio/fsdev_aio.o 00:06:18.999 CC module/accel/error/accel_error.o 00:06:18.999 CC module/accel/dsa/accel_dsa.o 
00:06:18.999 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:18.999 CC module/accel/error/accel_error_rpc.o 00:06:18.999 CC module/accel/dsa/accel_dsa_rpc.o 00:06:18.999 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:18.999 CC module/keyring/linux/keyring_rpc.o 00:06:18.999 CC module/fsdev/aio/linux_aio_mgr.o 00:06:18.999 CC module/accel/iaa/accel_iaa.o 00:06:18.999 CC module/blob/bdev/blob_bdev.o 00:06:18.999 CC module/accel/iaa/accel_iaa_rpc.o 00:06:18.999 CC module/scheduler/gscheduler/gscheduler.o 00:06:18.999 LIB libspdk_env_dpdk_rpc.a 00:06:18.999 SO libspdk_env_dpdk_rpc.so.6.0 00:06:18.999 SYMLINK libspdk_env_dpdk_rpc.so 00:06:18.999 LIB libspdk_keyring_linux.a 00:06:18.999 LIB libspdk_scheduler_gscheduler.a 00:06:18.999 LIB libspdk_scheduler_dpdk_governor.a 00:06:18.999 SO libspdk_keyring_linux.so.1.0 00:06:19.258 SO libspdk_scheduler_gscheduler.so.4.0 00:06:19.258 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:19.258 LIB libspdk_accel_error.a 00:06:19.258 LIB libspdk_accel_ioat.a 00:06:19.258 SO libspdk_accel_error.so.2.0 00:06:19.258 LIB libspdk_scheduler_dynamic.a 00:06:19.258 SO libspdk_accel_ioat.so.6.0 00:06:19.258 LIB libspdk_accel_iaa.a 00:06:19.258 SYMLINK libspdk_keyring_linux.so 00:06:19.258 SYMLINK libspdk_scheduler_gscheduler.so 00:06:19.258 LIB libspdk_keyring_file.a 00:06:19.258 SO libspdk_scheduler_dynamic.so.4.0 00:06:19.258 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:19.258 SO libspdk_accel_iaa.so.3.0 00:06:19.258 SO libspdk_keyring_file.so.2.0 00:06:19.258 SYMLINK libspdk_accel_error.so 00:06:19.258 SYMLINK libspdk_accel_ioat.so 00:06:19.258 SYMLINK libspdk_scheduler_dynamic.so 00:06:19.258 SYMLINK libspdk_accel_iaa.so 00:06:19.258 LIB libspdk_blob_bdev.a 00:06:19.258 LIB libspdk_accel_dsa.a 00:06:19.258 SYMLINK libspdk_keyring_file.so 00:06:19.258 SO libspdk_blob_bdev.so.11.0 00:06:19.258 SO libspdk_accel_dsa.so.5.0 00:06:19.258 SYMLINK libspdk_blob_bdev.so 00:06:19.258 SYMLINK libspdk_accel_dsa.so 00:06:19.519 LIB 
libspdk_vfu_device.a 00:06:19.519 SO libspdk_vfu_device.so.3.0 00:06:19.519 CC module/bdev/delay/vbdev_delay.o 00:06:19.519 CC module/bdev/gpt/gpt.o 00:06:19.519 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:19.519 CC module/bdev/gpt/vbdev_gpt.o 00:06:19.519 CC module/bdev/nvme/bdev_nvme.o 00:06:19.519 CC module/bdev/ftl/bdev_ftl.o 00:06:19.519 CC module/bdev/lvol/vbdev_lvol.o 00:06:19.519 CC module/bdev/error/vbdev_error_rpc.o 00:06:19.519 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:19.519 CC module/bdev/error/vbdev_error.o 00:06:19.519 CC module/blobfs/bdev/blobfs_bdev.o 00:06:19.519 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:19.519 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:19.519 CC module/bdev/nvme/nvme_rpc.o 00:06:19.519 CC module/bdev/null/bdev_null.o 00:06:19.519 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:19.519 CC module/bdev/malloc/bdev_malloc.o 00:06:19.519 CC module/bdev/nvme/bdev_mdns_client.o 00:06:19.519 CC module/bdev/passthru/vbdev_passthru.o 00:06:19.519 CC module/bdev/nvme/vbdev_opal.o 00:06:19.519 CC module/bdev/split/vbdev_split.o 00:06:19.519 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:19.519 CC module/bdev/null/bdev_null_rpc.o 00:06:19.519 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:19.519 CC module/bdev/split/vbdev_split_rpc.o 00:06:19.519 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:19.519 CC module/bdev/iscsi/bdev_iscsi.o 00:06:19.519 CC module/bdev/raid/bdev_raid.o 00:06:19.519 CC module/bdev/raid/bdev_raid_rpc.o 00:06:19.519 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:19.519 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:19.520 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:19.520 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:19.520 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:19.520 CC module/bdev/raid/bdev_raid_sb.o 00:06:19.520 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:19.520 CC module/bdev/aio/bdev_aio.o 00:06:19.520 CC module/bdev/raid/raid0.o 00:06:19.520 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:19.520 
CC module/bdev/aio/bdev_aio_rpc.o 00:06:19.520 CC module/bdev/raid/raid1.o 00:06:19.520 CC module/bdev/raid/concat.o 00:06:19.785 SYMLINK libspdk_vfu_device.so 00:06:19.785 LIB libspdk_sock_posix.a 00:06:19.785 SO libspdk_sock_posix.so.6.0 00:06:19.785 LIB libspdk_fsdev_aio.a 00:06:19.785 SO libspdk_fsdev_aio.so.1.0 00:06:20.043 SYMLINK libspdk_sock_posix.so 00:06:20.043 SYMLINK libspdk_fsdev_aio.so 00:06:20.043 LIB libspdk_blobfs_bdev.a 00:06:20.043 SO libspdk_blobfs_bdev.so.6.0 00:06:20.043 LIB libspdk_bdev_split.a 00:06:20.043 LIB libspdk_bdev_error.a 00:06:20.043 SO libspdk_bdev_split.so.6.0 00:06:20.043 SO libspdk_bdev_error.so.6.0 00:06:20.043 SYMLINK libspdk_blobfs_bdev.so 00:06:20.043 LIB libspdk_bdev_gpt.a 00:06:20.043 LIB libspdk_bdev_null.a 00:06:20.043 LIB libspdk_bdev_ftl.a 00:06:20.043 SO libspdk_bdev_gpt.so.6.0 00:06:20.043 LIB libspdk_bdev_passthru.a 00:06:20.043 SYMLINK libspdk_bdev_split.so 00:06:20.043 SYMLINK libspdk_bdev_error.so 00:06:20.043 SO libspdk_bdev_null.so.6.0 00:06:20.043 SO libspdk_bdev_ftl.so.6.0 00:06:20.043 SO libspdk_bdev_passthru.so.6.0 00:06:20.043 LIB libspdk_bdev_zone_block.a 00:06:20.043 SYMLINK libspdk_bdev_gpt.so 00:06:20.043 SO libspdk_bdev_zone_block.so.6.0 00:06:20.301 SYMLINK libspdk_bdev_null.so 00:06:20.301 SYMLINK libspdk_bdev_ftl.so 00:06:20.301 SYMLINK libspdk_bdev_passthru.so 00:06:20.301 LIB libspdk_bdev_iscsi.a 00:06:20.301 LIB libspdk_bdev_malloc.a 00:06:20.301 LIB libspdk_bdev_aio.a 00:06:20.301 SO libspdk_bdev_malloc.so.6.0 00:06:20.301 SO libspdk_bdev_iscsi.so.6.0 00:06:20.301 SYMLINK libspdk_bdev_zone_block.so 00:06:20.301 SO libspdk_bdev_aio.so.6.0 00:06:20.301 LIB libspdk_bdev_delay.a 00:06:20.301 LIB libspdk_bdev_virtio.a 00:06:20.302 SO libspdk_bdev_delay.so.6.0 00:06:20.302 SO libspdk_bdev_virtio.so.6.0 00:06:20.302 SYMLINK libspdk_bdev_malloc.so 00:06:20.302 SYMLINK libspdk_bdev_iscsi.so 00:06:20.302 SYMLINK libspdk_bdev_aio.so 00:06:20.302 SYMLINK libspdk_bdev_delay.so 00:06:20.302 LIB 
libspdk_bdev_lvol.a 00:06:20.302 SYMLINK libspdk_bdev_virtio.so 00:06:20.302 SO libspdk_bdev_lvol.so.6.0 00:06:20.302 SYMLINK libspdk_bdev_lvol.so 00:06:20.868 LIB libspdk_bdev_raid.a 00:06:20.868 SO libspdk_bdev_raid.so.6.0 00:06:20.868 SYMLINK libspdk_bdev_raid.so 00:06:22.247 LIB libspdk_bdev_nvme.a 00:06:22.247 SO libspdk_bdev_nvme.so.7.0 00:06:22.247 SYMLINK libspdk_bdev_nvme.so 00:06:22.505 CC module/event/subsystems/sock/sock.o 00:06:22.505 CC module/event/subsystems/iobuf/iobuf.o 00:06:22.505 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:22.505 CC module/event/subsystems/vmd/vmd.o 00:06:22.505 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:22.505 CC module/event/subsystems/scheduler/scheduler.o 00:06:22.505 CC module/event/subsystems/fsdev/fsdev.o 00:06:22.505 CC module/event/subsystems/keyring/keyring.o 00:06:22.505 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:22.505 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:06:22.763 LIB libspdk_event_keyring.a 00:06:22.763 LIB libspdk_event_fsdev.a 00:06:22.763 LIB libspdk_event_vhost_blk.a 00:06:22.763 LIB libspdk_event_vfu_tgt.a 00:06:22.763 LIB libspdk_event_scheduler.a 00:06:22.763 LIB libspdk_event_sock.a 00:06:22.763 LIB libspdk_event_vmd.a 00:06:22.763 SO libspdk_event_keyring.so.1.0 00:06:22.763 SO libspdk_event_fsdev.so.1.0 00:06:22.763 LIB libspdk_event_iobuf.a 00:06:22.763 SO libspdk_event_vhost_blk.so.3.0 00:06:22.763 SO libspdk_event_vfu_tgt.so.3.0 00:06:22.763 SO libspdk_event_sock.so.5.0 00:06:22.763 SO libspdk_event_scheduler.so.4.0 00:06:22.763 SO libspdk_event_vmd.so.6.0 00:06:22.763 SO libspdk_event_iobuf.so.3.0 00:06:22.763 SYMLINK libspdk_event_fsdev.so 00:06:22.763 SYMLINK libspdk_event_keyring.so 00:06:22.763 SYMLINK libspdk_event_vhost_blk.so 00:06:22.763 SYMLINK libspdk_event_vfu_tgt.so 00:06:22.763 SYMLINK libspdk_event_sock.so 00:06:22.763 SYMLINK libspdk_event_scheduler.so 00:06:22.763 SYMLINK libspdk_event_vmd.so 00:06:22.763 SYMLINK libspdk_event_iobuf.so 00:06:23.021 CC 
module/event/subsystems/accel/accel.o 00:06:23.021 LIB libspdk_event_accel.a 00:06:23.021 SO libspdk_event_accel.so.6.0 00:06:23.021 SYMLINK libspdk_event_accel.so 00:06:23.279 CC module/event/subsystems/bdev/bdev.o 00:06:23.538 LIB libspdk_event_bdev.a 00:06:23.538 SO libspdk_event_bdev.so.6.0 00:06:23.538 SYMLINK libspdk_event_bdev.so 00:06:23.796 CC module/event/subsystems/ublk/ublk.o 00:06:23.796 CC module/event/subsystems/scsi/scsi.o 00:06:23.796 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:23.796 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:23.796 CC module/event/subsystems/nbd/nbd.o 00:06:23.796 LIB libspdk_event_ublk.a 00:06:23.796 LIB libspdk_event_nbd.a 00:06:23.796 LIB libspdk_event_scsi.a 00:06:23.796 SO libspdk_event_nbd.so.6.0 00:06:23.796 SO libspdk_event_ublk.so.3.0 00:06:23.796 SO libspdk_event_scsi.so.6.0 00:06:24.054 SYMLINK libspdk_event_nbd.so 00:06:24.054 SYMLINK libspdk_event_ublk.so 00:06:24.054 SYMLINK libspdk_event_scsi.so 00:06:24.054 LIB libspdk_event_nvmf.a 00:06:24.054 SO libspdk_event_nvmf.so.6.0 00:06:24.054 SYMLINK libspdk_event_nvmf.so 00:06:24.054 CC module/event/subsystems/iscsi/iscsi.o 00:06:24.054 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:24.312 LIB libspdk_event_vhost_scsi.a 00:06:24.312 LIB libspdk_event_iscsi.a 00:06:24.312 SO libspdk_event_vhost_scsi.so.3.0 00:06:24.312 SO libspdk_event_iscsi.so.6.0 00:06:24.312 SYMLINK libspdk_event_vhost_scsi.so 00:06:24.312 SYMLINK libspdk_event_iscsi.so 00:06:24.571 SO libspdk.so.6.0 00:06:24.571 SYMLINK libspdk.so 00:06:24.571 CC app/trace_record/trace_record.o 00:06:24.571 CC app/spdk_nvme_discover/discovery_aer.o 00:06:24.571 CXX app/trace/trace.o 00:06:24.571 CC app/spdk_top/spdk_top.o 00:06:24.571 CC app/spdk_lspci/spdk_lspci.o 00:06:24.571 CC app/spdk_nvme_perf/perf.o 00:06:24.571 CC test/rpc_client/rpc_client_test.o 00:06:24.571 TEST_HEADER include/spdk/accel.h 00:06:24.571 TEST_HEADER include/spdk/accel_module.h 00:06:24.571 TEST_HEADER 
include/spdk/assert.h 00:06:24.571 TEST_HEADER include/spdk/barrier.h 00:06:24.571 TEST_HEADER include/spdk/base64.h 00:06:24.571 TEST_HEADER include/spdk/bdev.h 00:06:24.571 TEST_HEADER include/spdk/bdev_module.h 00:06:24.571 CC app/spdk_nvme_identify/identify.o 00:06:24.571 TEST_HEADER include/spdk/bdev_zone.h 00:06:24.571 TEST_HEADER include/spdk/bit_array.h 00:06:24.571 TEST_HEADER include/spdk/bit_pool.h 00:06:24.571 TEST_HEADER include/spdk/blob_bdev.h 00:06:24.571 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:24.571 TEST_HEADER include/spdk/blobfs.h 00:06:24.571 TEST_HEADER include/spdk/blob.h 00:06:24.571 TEST_HEADER include/spdk/conf.h 00:06:24.571 TEST_HEADER include/spdk/config.h 00:06:24.571 TEST_HEADER include/spdk/cpuset.h 00:06:24.571 TEST_HEADER include/spdk/crc16.h 00:06:24.571 TEST_HEADER include/spdk/crc32.h 00:06:24.571 TEST_HEADER include/spdk/crc64.h 00:06:24.571 TEST_HEADER include/spdk/dif.h 00:06:24.571 TEST_HEADER include/spdk/dma.h 00:06:24.571 TEST_HEADER include/spdk/endian.h 00:06:24.571 TEST_HEADER include/spdk/env_dpdk.h 00:06:24.571 TEST_HEADER include/spdk/env.h 00:06:24.571 TEST_HEADER include/spdk/event.h 00:06:24.571 TEST_HEADER include/spdk/fd_group.h 00:06:24.571 TEST_HEADER include/spdk/fd.h 00:06:24.571 TEST_HEADER include/spdk/file.h 00:06:24.571 TEST_HEADER include/spdk/fsdev.h 00:06:24.571 TEST_HEADER include/spdk/fsdev_module.h 00:06:24.571 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:24.571 TEST_HEADER include/spdk/ftl.h 00:06:24.571 TEST_HEADER include/spdk/gpt_spec.h 00:06:24.571 TEST_HEADER include/spdk/hexlify.h 00:06:24.572 TEST_HEADER include/spdk/histogram_data.h 00:06:24.572 TEST_HEADER include/spdk/idxd.h 00:06:24.572 TEST_HEADER include/spdk/idxd_spec.h 00:06:24.572 TEST_HEADER include/spdk/init.h 00:06:24.572 TEST_HEADER include/spdk/ioat_spec.h 00:06:24.572 TEST_HEADER include/spdk/ioat.h 00:06:24.572 TEST_HEADER include/spdk/iscsi_spec.h 00:06:24.572 TEST_HEADER include/spdk/json.h 00:06:24.572 
TEST_HEADER include/spdk/jsonrpc.h 00:06:24.572 TEST_HEADER include/spdk/keyring.h 00:06:24.572 TEST_HEADER include/spdk/likely.h 00:06:24.572 TEST_HEADER include/spdk/keyring_module.h 00:06:24.572 TEST_HEADER include/spdk/log.h 00:06:24.572 TEST_HEADER include/spdk/lvol.h 00:06:24.572 TEST_HEADER include/spdk/md5.h 00:06:24.572 TEST_HEADER include/spdk/memory.h 00:06:24.572 TEST_HEADER include/spdk/mmio.h 00:06:24.572 TEST_HEADER include/spdk/nbd.h 00:06:24.572 TEST_HEADER include/spdk/net.h 00:06:24.572 TEST_HEADER include/spdk/notify.h 00:06:24.572 TEST_HEADER include/spdk/nvme.h 00:06:24.836 TEST_HEADER include/spdk/nvme_intel.h 00:06:24.836 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:24.836 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:24.836 TEST_HEADER include/spdk/nvme_spec.h 00:06:24.836 TEST_HEADER include/spdk/nvme_zns.h 00:06:24.836 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:24.836 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:24.836 TEST_HEADER include/spdk/nvmf.h 00:06:24.836 TEST_HEADER include/spdk/nvmf_spec.h 00:06:24.836 TEST_HEADER include/spdk/nvmf_transport.h 00:06:24.836 TEST_HEADER include/spdk/opal.h 00:06:24.836 TEST_HEADER include/spdk/opal_spec.h 00:06:24.836 TEST_HEADER include/spdk/pci_ids.h 00:06:24.836 TEST_HEADER include/spdk/pipe.h 00:06:24.836 TEST_HEADER include/spdk/queue.h 00:06:24.836 TEST_HEADER include/spdk/reduce.h 00:06:24.836 TEST_HEADER include/spdk/rpc.h 00:06:24.836 TEST_HEADER include/spdk/scheduler.h 00:06:24.836 TEST_HEADER include/spdk/scsi.h 00:06:24.836 TEST_HEADER include/spdk/scsi_spec.h 00:06:24.836 TEST_HEADER include/spdk/sock.h 00:06:24.836 TEST_HEADER include/spdk/stdinc.h 00:06:24.836 TEST_HEADER include/spdk/string.h 00:06:24.836 TEST_HEADER include/spdk/thread.h 00:06:24.836 TEST_HEADER include/spdk/trace.h 00:06:24.836 TEST_HEADER include/spdk/trace_parser.h 00:06:24.836 TEST_HEADER include/spdk/tree.h 00:06:24.836 TEST_HEADER include/spdk/ublk.h 00:06:24.836 TEST_HEADER include/spdk/util.h 
00:06:24.836 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:24.836 TEST_HEADER include/spdk/version.h 00:06:24.836 TEST_HEADER include/spdk/uuid.h 00:06:24.836 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:24.836 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:24.836 TEST_HEADER include/spdk/vhost.h 00:06:24.836 TEST_HEADER include/spdk/vmd.h 00:06:24.836 TEST_HEADER include/spdk/xor.h 00:06:24.836 TEST_HEADER include/spdk/zipf.h 00:06:24.836 CXX test/cpp_headers/accel.o 00:06:24.836 CXX test/cpp_headers/accel_module.o 00:06:24.836 CXX test/cpp_headers/assert.o 00:06:24.836 CXX test/cpp_headers/barrier.o 00:06:24.836 CXX test/cpp_headers/base64.o 00:06:24.836 CC app/spdk_dd/spdk_dd.o 00:06:24.836 CXX test/cpp_headers/bdev.o 00:06:24.836 CXX test/cpp_headers/bdev_module.o 00:06:24.836 CXX test/cpp_headers/bdev_zone.o 00:06:24.836 CXX test/cpp_headers/bit_array.o 00:06:24.836 CXX test/cpp_headers/bit_pool.o 00:06:24.836 CXX test/cpp_headers/blob_bdev.o 00:06:24.836 CXX test/cpp_headers/blobfs_bdev.o 00:06:24.836 CXX test/cpp_headers/blobfs.o 00:06:24.836 CXX test/cpp_headers/blob.o 00:06:24.836 CXX test/cpp_headers/conf.o 00:06:24.836 CXX test/cpp_headers/config.o 00:06:24.836 CXX test/cpp_headers/cpuset.o 00:06:24.836 CXX test/cpp_headers/crc16.o 00:06:24.836 CC app/iscsi_tgt/iscsi_tgt.o 00:06:24.836 CC app/nvmf_tgt/nvmf_main.o 00:06:24.836 CXX test/cpp_headers/crc32.o 00:06:24.836 CC examples/ioat/perf/perf.o 00:06:24.836 CC examples/ioat/verify/verify.o 00:06:24.836 CC app/spdk_tgt/spdk_tgt.o 00:06:24.836 CC test/thread/poller_perf/poller_perf.o 00:06:24.836 CC test/env/memory/memory_ut.o 00:06:24.836 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:24.836 CC examples/util/zipf/zipf.o 00:06:24.836 CC test/env/vtophys/vtophys.o 00:06:24.836 CC test/app/jsoncat/jsoncat.o 00:06:24.836 CC app/fio/nvme/fio_plugin.o 00:06:24.836 CC test/app/histogram_perf/histogram_perf.o 00:06:24.836 CC test/env/pci/pci_ut.o 00:06:24.836 CC test/app/stub/stub.o 00:06:24.836 
CC test/dma/test_dma/test_dma.o 00:06:24.836 CC app/fio/bdev/fio_plugin.o 00:06:24.836 CC test/app/bdev_svc/bdev_svc.o 00:06:24.836 LINK spdk_lspci 00:06:25.098 CC test/env/mem_callbacks/mem_callbacks.o 00:06:25.098 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:25.098 LINK rpc_client_test 00:06:25.098 LINK spdk_nvme_discover 00:06:25.098 LINK interrupt_tgt 00:06:25.098 LINK poller_perf 00:06:25.098 LINK zipf 00:06:25.098 LINK jsoncat 00:06:25.098 LINK vtophys 00:06:25.098 CXX test/cpp_headers/crc64.o 00:06:25.098 LINK histogram_perf 00:06:25.098 LINK env_dpdk_post_init 00:06:25.098 CXX test/cpp_headers/dif.o 00:06:25.098 CXX test/cpp_headers/dma.o 00:06:25.098 CXX test/cpp_headers/endian.o 00:06:25.098 LINK spdk_trace_record 00:06:25.098 LINK nvmf_tgt 00:06:25.098 CXX test/cpp_headers/env_dpdk.o 00:06:25.098 CXX test/cpp_headers/env.o 00:06:25.098 CXX test/cpp_headers/event.o 00:06:25.098 CXX test/cpp_headers/fd_group.o 00:06:25.360 CXX test/cpp_headers/fd.o 00:06:25.360 CXX test/cpp_headers/file.o 00:06:25.360 CXX test/cpp_headers/fsdev.o 00:06:25.360 CXX test/cpp_headers/fsdev_module.o 00:06:25.360 CXX test/cpp_headers/ftl.o 00:06:25.360 LINK stub 00:06:25.360 LINK iscsi_tgt 00:06:25.360 CXX test/cpp_headers/fuse_dispatcher.o 00:06:25.360 LINK verify 00:06:25.360 CXX test/cpp_headers/gpt_spec.o 00:06:25.360 CXX test/cpp_headers/hexlify.o 00:06:25.360 LINK spdk_tgt 00:06:25.360 LINK ioat_perf 00:06:25.360 LINK bdev_svc 00:06:25.360 CXX test/cpp_headers/histogram_data.o 00:06:25.360 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:25.360 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:25.360 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:25.360 CXX test/cpp_headers/idxd.o 00:06:25.622 CXX test/cpp_headers/idxd_spec.o 00:06:25.622 CXX test/cpp_headers/init.o 00:06:25.622 CXX test/cpp_headers/ioat.o 00:06:25.622 CXX test/cpp_headers/ioat_spec.o 00:06:25.622 LINK spdk_dd 00:06:25.622 CXX test/cpp_headers/iscsi_spec.o 00:06:25.622 CXX test/cpp_headers/json.o 
00:06:25.622 CXX test/cpp_headers/jsonrpc.o 00:06:25.622 CXX test/cpp_headers/keyring.o 00:06:25.622 LINK spdk_trace 00:06:25.622 CXX test/cpp_headers/keyring_module.o 00:06:25.622 CXX test/cpp_headers/likely.o 00:06:25.622 LINK pci_ut 00:06:25.622 CXX test/cpp_headers/log.o 00:06:25.622 CXX test/cpp_headers/lvol.o 00:06:25.622 CXX test/cpp_headers/md5.o 00:06:25.622 CXX test/cpp_headers/memory.o 00:06:25.622 CXX test/cpp_headers/mmio.o 00:06:25.622 CXX test/cpp_headers/nbd.o 00:06:25.622 CXX test/cpp_headers/net.o 00:06:25.622 CXX test/cpp_headers/notify.o 00:06:25.622 CXX test/cpp_headers/nvme.o 00:06:25.622 CXX test/cpp_headers/nvme_intel.o 00:06:25.622 CXX test/cpp_headers/nvme_ocssd.o 00:06:25.622 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:25.886 CXX test/cpp_headers/nvme_spec.o 00:06:25.886 CXX test/cpp_headers/nvme_zns.o 00:06:25.886 LINK nvme_fuzz 00:06:25.886 CC test/event/reactor_perf/reactor_perf.o 00:06:25.886 CC test/event/reactor/reactor.o 00:06:25.886 CC test/event/event_perf/event_perf.o 00:06:25.886 CC test/event/app_repeat/app_repeat.o 00:06:25.886 CXX test/cpp_headers/nvmf_cmd.o 00:06:25.886 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:25.886 CXX test/cpp_headers/nvmf.o 00:06:25.886 CXX test/cpp_headers/nvmf_spec.o 00:06:25.886 CC examples/sock/hello_world/hello_sock.o 00:06:25.886 CXX test/cpp_headers/nvmf_transport.o 00:06:25.886 CC examples/thread/thread/thread_ex.o 00:06:25.886 CC examples/vmd/lsvmd/lsvmd.o 00:06:25.886 CC test/event/scheduler/scheduler.o 00:06:25.886 CC examples/vmd/led/led.o 00:06:25.886 CC examples/idxd/perf/perf.o 00:06:25.886 CXX test/cpp_headers/opal.o 00:06:25.886 LINK spdk_nvme 00:06:25.886 LINK test_dma 00:06:25.886 LINK spdk_bdev 00:06:25.886 CXX test/cpp_headers/opal_spec.o 00:06:25.886 CXX test/cpp_headers/pci_ids.o 00:06:26.146 CXX test/cpp_headers/pipe.o 00:06:26.146 CXX test/cpp_headers/queue.o 00:06:26.146 CXX test/cpp_headers/reduce.o 00:06:26.146 CXX test/cpp_headers/rpc.o 00:06:26.146 CXX 
test/cpp_headers/scheduler.o 00:06:26.146 CXX test/cpp_headers/scsi.o 00:06:26.146 CXX test/cpp_headers/scsi_spec.o 00:06:26.146 CXX test/cpp_headers/sock.o 00:06:26.147 CXX test/cpp_headers/stdinc.o 00:06:26.147 CXX test/cpp_headers/string.o 00:06:26.147 CXX test/cpp_headers/thread.o 00:06:26.147 CXX test/cpp_headers/trace.o 00:06:26.147 CXX test/cpp_headers/trace_parser.o 00:06:26.147 LINK reactor 00:06:26.147 CXX test/cpp_headers/tree.o 00:06:26.147 CXX test/cpp_headers/ublk.o 00:06:26.147 CXX test/cpp_headers/util.o 00:06:26.147 LINK reactor_perf 00:06:26.147 LINK event_perf 00:06:26.147 LINK mem_callbacks 00:06:26.147 CXX test/cpp_headers/uuid.o 00:06:26.147 CXX test/cpp_headers/version.o 00:06:26.147 CXX test/cpp_headers/vfio_user_pci.o 00:06:26.147 LINK spdk_nvme_perf 00:06:26.147 CXX test/cpp_headers/vfio_user_spec.o 00:06:26.147 CXX test/cpp_headers/vhost.o 00:06:26.147 CXX test/cpp_headers/vmd.o 00:06:26.147 CXX test/cpp_headers/xor.o 00:06:26.147 LINK lsvmd 00:06:26.147 LINK app_repeat 00:06:26.147 LINK vhost_fuzz 00:06:26.147 CXX test/cpp_headers/zipf.o 00:06:26.408 CC app/vhost/vhost.o 00:06:26.408 LINK led 00:06:26.408 LINK spdk_nvme_identify 00:06:26.408 LINK spdk_top 00:06:26.408 LINK hello_sock 00:06:26.408 LINK scheduler 00:06:26.408 LINK thread 00:06:26.669 CC test/nvme/overhead/overhead.o 00:06:26.669 CC test/nvme/err_injection/err_injection.o 00:06:26.669 CC test/nvme/aer/aer.o 00:06:26.669 CC test/nvme/simple_copy/simple_copy.o 00:06:26.669 CC test/nvme/startup/startup.o 00:06:26.669 CC test/nvme/fdp/fdp.o 00:06:26.669 CC test/nvme/sgl/sgl.o 00:06:26.669 CC test/nvme/e2edp/nvme_dp.o 00:06:26.669 CC test/nvme/compliance/nvme_compliance.o 00:06:26.669 CC test/nvme/connect_stress/connect_stress.o 00:06:26.669 CC test/nvme/reserve/reserve.o 00:06:26.669 CC test/nvme/reset/reset.o 00:06:26.669 CC test/nvme/fused_ordering/fused_ordering.o 00:06:26.669 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:26.669 CC 
test/nvme/boot_partition/boot_partition.o 00:06:26.669 CC test/nvme/cuse/cuse.o 00:06:26.669 LINK idxd_perf 00:06:26.669 LINK vhost 00:06:26.669 CC test/blobfs/mkfs/mkfs.o 00:06:26.669 CC test/accel/dif/dif.o 00:06:26.669 CC test/lvol/esnap/esnap.o 00:06:26.928 LINK startup 00:06:26.928 LINK connect_stress 00:06:26.928 CC examples/nvme/arbitration/arbitration.o 00:06:26.928 CC examples/nvme/hello_world/hello_world.o 00:06:26.928 CC examples/nvme/hotplug/hotplug.o 00:06:26.928 CC examples/nvme/reconnect/reconnect.o 00:06:26.928 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:26.928 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:26.928 LINK boot_partition 00:06:26.928 CC examples/nvme/abort/abort.o 00:06:26.928 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:26.928 LINK simple_copy 00:06:26.928 LINK fused_ordering 00:06:26.928 CC examples/accel/perf/accel_perf.o 00:06:26.928 LINK nvme_dp 00:06:26.928 LINK doorbell_aers 00:06:26.928 LINK aer 00:06:26.928 LINK memory_ut 00:06:26.928 LINK err_injection 00:06:26.928 LINK mkfs 00:06:26.928 LINK reset 00:06:26.928 LINK sgl 00:06:26.928 LINK reserve 00:06:26.928 LINK fdp 00:06:27.187 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:27.187 CC examples/blob/cli/blobcli.o 00:06:27.187 CC examples/blob/hello_world/hello_blob.o 00:06:27.187 LINK overhead 00:06:27.187 LINK nvme_compliance 00:06:27.187 LINK hello_world 00:06:27.187 LINK cmb_copy 00:06:27.187 LINK pmr_persistence 00:06:27.445 LINK hotplug 00:06:27.445 LINK arbitration 00:06:27.445 LINK reconnect 00:06:27.445 LINK abort 00:06:27.445 LINK hello_fsdev 00:06:27.445 LINK hello_blob 00:06:27.445 LINK dif 00:06:27.445 LINK accel_perf 00:06:27.704 LINK nvme_manage 00:06:27.704 LINK blobcli 00:06:27.962 CC examples/bdev/hello_world/hello_bdev.o 00:06:27.962 CC test/bdev/bdevio/bdevio.o 00:06:27.962 CC examples/bdev/bdevperf/bdevperf.o 00:06:27.962 LINK iscsi_fuzz 00:06:28.220 LINK hello_bdev 00:06:28.220 LINK cuse 00:06:28.220 LINK bdevio 00:06:28.861 LINK bdevperf 
00:06:29.119 CC examples/nvmf/nvmf/nvmf.o 00:06:29.377 LINK nvmf 00:06:31.910 LINK esnap 00:06:32.170 00:06:32.170 real 1m10.154s 00:06:32.170 user 11m54.557s 00:06:32.170 sys 2m37.801s 00:06:32.170 13:17:13 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:06:32.170 13:17:13 make -- common/autotest_common.sh@10 -- $ set +x 00:06:32.170 ************************************ 00:06:32.170 END TEST make 00:06:32.170 ************************************ 00:06:32.170 13:17:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:32.170 13:17:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:32.170 13:17:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:32.170 13:17:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:32.170 13:17:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:32.170 13:17:13 -- pm/common@44 -- $ pid=1612169 00:06:32.170 13:17:13 -- pm/common@50 -- $ kill -TERM 1612169 00:06:32.170 13:17:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:32.170 13:17:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:32.170 13:17:13 -- pm/common@44 -- $ pid=1612171 00:06:32.170 13:17:13 -- pm/common@50 -- $ kill -TERM 1612171 00:06:32.170 13:17:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:32.170 13:17:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:32.170 13:17:13 -- pm/common@44 -- $ pid=1612173 00:06:32.170 13:17:13 -- pm/common@50 -- $ kill -TERM 1612173 00:06:32.170 13:17:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:32.170 13:17:13 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:32.170 13:17:13 -- pm/common@44 -- $ pid=1612201 00:06:32.170 
13:17:13 -- pm/common@50 -- $ sudo -E kill -TERM 1612201 00:06:32.170 13:17:13 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:32.170 13:17:13 -- common/autotest_common.sh@1681 -- # lcov --version 00:06:32.170 13:17:13 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:32.429 13:17:13 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:32.429 13:17:13 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.429 13:17:13 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.429 13:17:13 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.429 13:17:13 -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.429 13:17:13 -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.429 13:17:13 -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.429 13:17:13 -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.429 13:17:13 -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.429 13:17:13 -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.429 13:17:13 -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.429 13:17:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:32.429 13:17:13 -- scripts/common.sh@344 -- # case "$op" in 00:06:32.429 13:17:13 -- scripts/common.sh@345 -- # : 1 00:06:32.429 13:17:13 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.429 13:17:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:32.429 13:17:13 -- scripts/common.sh@365 -- # decimal 1 00:06:32.429 13:17:13 -- scripts/common.sh@353 -- # local d=1 00:06:32.429 13:17:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.429 13:17:13 -- scripts/common.sh@355 -- # echo 1 00:06:32.429 13:17:13 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.429 13:17:13 -- scripts/common.sh@366 -- # decimal 2 00:06:32.429 13:17:13 -- scripts/common.sh@353 -- # local d=2 00:06:32.429 13:17:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.429 13:17:13 -- scripts/common.sh@355 -- # echo 2 00:06:32.429 13:17:13 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.429 13:17:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.429 13:17:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.429 13:17:13 -- scripts/common.sh@368 -- # return 0 00:06:32.429 13:17:13 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.429 13:17:13 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:32.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.429 --rc genhtml_branch_coverage=1 00:06:32.429 --rc genhtml_function_coverage=1 00:06:32.429 --rc genhtml_legend=1 00:06:32.429 --rc geninfo_all_blocks=1 00:06:32.429 --rc geninfo_unexecuted_blocks=1 00:06:32.429 00:06:32.429 ' 00:06:32.429 13:17:13 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:32.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.429 --rc genhtml_branch_coverage=1 00:06:32.429 --rc genhtml_function_coverage=1 00:06:32.429 --rc genhtml_legend=1 00:06:32.429 --rc geninfo_all_blocks=1 00:06:32.429 --rc geninfo_unexecuted_blocks=1 00:06:32.429 00:06:32.429 ' 00:06:32.429 13:17:13 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:32.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.429 --rc genhtml_branch_coverage=1 00:06:32.429 --rc 
genhtml_function_coverage=1 00:06:32.429 --rc genhtml_legend=1 00:06:32.429 --rc geninfo_all_blocks=1 00:06:32.429 --rc geninfo_unexecuted_blocks=1 00:06:32.429 00:06:32.429 ' 00:06:32.429 13:17:13 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:32.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.429 --rc genhtml_branch_coverage=1 00:06:32.429 --rc genhtml_function_coverage=1 00:06:32.429 --rc genhtml_legend=1 00:06:32.429 --rc geninfo_all_blocks=1 00:06:32.429 --rc geninfo_unexecuted_blocks=1 00:06:32.429 00:06:32.429 ' 00:06:32.429 13:17:13 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:32.429 13:17:13 -- nvmf/common.sh@7 -- # uname -s 00:06:32.429 13:17:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:32.429 13:17:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:32.429 13:17:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:32.429 13:17:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:32.429 13:17:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:32.429 13:17:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:32.429 13:17:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:32.429 13:17:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:32.429 13:17:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:32.429 13:17:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:32.429 13:17:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:06:32.429 13:17:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:06:32.429 13:17:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:32.429 13:17:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:32.429 13:17:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:32.429 13:17:13 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:32.429 13:17:13 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:32.429 13:17:13 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:32.429 13:17:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:32.429 13:17:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:32.429 13:17:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:32.430 13:17:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.430 13:17:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.430 13:17:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.430 13:17:13 -- paths/export.sh@5 -- # export PATH 00:06:32.430 13:17:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:32.430 13:17:13 -- nvmf/common.sh@51 -- # : 0 00:06:32.430 13:17:13 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:32.430 13:17:13 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:06:32.430 13:17:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:32.430 13:17:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:32.430 13:17:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:32.430 13:17:13 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:32.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:32.430 13:17:13 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:32.430 13:17:13 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:32.430 13:17:13 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:32.430 13:17:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:32.430 13:17:13 -- spdk/autotest.sh@32 -- # uname -s 00:06:32.430 13:17:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:32.430 13:17:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:32.430 13:17:13 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:32.430 13:17:13 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:32.430 13:17:13 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:32.430 13:17:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:32.430 13:17:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:32.430 13:17:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:32.430 13:17:13 -- spdk/autotest.sh@48 -- # udevadm_pid=1671229 00:06:32.430 13:17:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:32.430 13:17:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:32.430 13:17:13 -- pm/common@17 -- # local monitor 00:06:32.430 13:17:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:32.430 13:17:13 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:06:32.430 13:17:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:32.430 13:17:13 -- pm/common@21 -- # date +%s 00:06:32.430 13:17:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:32.430 13:17:13 -- pm/common@21 -- # date +%s 00:06:32.430 13:17:13 -- pm/common@25 -- # sleep 1 00:06:32.430 13:17:13 -- pm/common@21 -- # date +%s 00:06:32.430 13:17:13 -- pm/common@21 -- # date +%s 00:06:32.430 13:17:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728299833 00:06:32.430 13:17:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728299833 00:06:32.430 13:17:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728299833 00:06:32.430 13:17:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1728299833 00:06:32.430 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728299833_collect-cpu-load.pm.log 00:06:32.430 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728299833_collect-vmstat.pm.log 00:06:32.430 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728299833_collect-cpu-temp.pm.log 00:06:32.430 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1728299833_collect-bmc-pm.bmc.pm.log 00:06:33.369 
13:17:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:33.369 13:17:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:33.369 13:17:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:33.369 13:17:14 -- common/autotest_common.sh@10 -- # set +x 00:06:33.369 13:17:14 -- spdk/autotest.sh@59 -- # create_test_list 00:06:33.369 13:17:14 -- common/autotest_common.sh@748 -- # xtrace_disable 00:06:33.369 13:17:14 -- common/autotest_common.sh@10 -- # set +x 00:06:33.369 13:17:14 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:06:33.369 13:17:14 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:33.369 13:17:15 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:33.369 13:17:15 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:33.369 13:17:15 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:33.369 13:17:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:33.369 13:17:15 -- common/autotest_common.sh@1455 -- # uname 00:06:33.369 13:17:15 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:33.369 13:17:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:33.369 13:17:15 -- common/autotest_common.sh@1475 -- # uname 00:06:33.369 13:17:15 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:33.369 13:17:15 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:33.369 13:17:15 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:33.627 lcov: LCOV version 1.15 00:06:33.627 13:17:15 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:06:51.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:06:51.705 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:07:09.802 13:17:50 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:09.802 13:17:50 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.802 13:17:50 -- common/autotest_common.sh@10 -- # set +x 00:07:09.802 13:17:50 -- spdk/autotest.sh@78 -- # rm -f 00:07:09.802 13:17:50 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:10.370 0000:84:00.0 (8086 0a54): Already using the nvme driver 00:07:10.370 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:07:10.370 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:07:10.370 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:07:10.370 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:07:10.629 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:07:10.629 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:07:10.629 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:07:10.629 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:07:10.629 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:07:10.629 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:07:10.629 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:07:10.629 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:07:10.629 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:07:10.629 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:07:10.629 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:07:10.629 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:07:10.889 13:17:52 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:10.889 13:17:52 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:10.889 13:17:52 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:10.889 13:17:52 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:10.889 13:17:52 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:10.889 13:17:52 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:10.889 13:17:52 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:10.889 13:17:52 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:10.889 13:17:52 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:10.889 13:17:52 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:10.889 13:17:52 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:10.889 13:17:52 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:10.889 13:17:52 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:10.889 13:17:52 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:10.889 13:17:52 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:10.889 No valid GPT data, bailing 00:07:10.889 13:17:52 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:10.889 13:17:52 -- scripts/common.sh@394 -- # pt= 00:07:10.889 13:17:52 -- scripts/common.sh@395 -- # return 1 00:07:10.889 13:17:52 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:10.889 1+0 records in 00:07:10.889 1+0 records out 00:07:10.889 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00157043 s, 668 MB/s 00:07:10.889 13:17:52 -- spdk/autotest.sh@105 -- # sync 00:07:10.889 13:17:52 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:10.889 13:17:52 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:10.889 13:17:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:12.793 13:17:54 -- spdk/autotest.sh@111 -- # uname -s 00:07:12.793 13:17:54 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:12.793 13:17:54 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:12.793 13:17:54 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:14.169 Hugepages 00:07:14.169 node hugesize free / total 00:07:14.169 node0 1048576kB 0 / 0 00:07:14.169 node0 2048kB 0 / 0 00:07:14.169 node1 1048576kB 0 / 0 00:07:14.169 node1 2048kB 0 / 0 00:07:14.169 00:07:14.169 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:14.169 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:07:14.169 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:07:14.169 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:07:14.169 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:07:14.169 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:07:14.169 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:07:14.169 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:07:14.169 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:07:14.169 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:07:14.169 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:07:14.169 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:07:14.169 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:07:14.169 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:07:14.169 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:07:14.169 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:07:14.169 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:07:14.169 NVMe 0000:84:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:07:14.169 13:17:55 -- spdk/autotest.sh@117 -- # uname -s 00:07:14.169 13:17:55 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:14.169 13:17:55 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:07:14.169 13:17:55 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:15.550 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:15.550 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:15.550 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:15.550 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:15.550 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:15.550 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:15.550 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:15.550 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:15.550 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:15.550 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:15.550 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:15.550 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:15.550 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:15.550 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:15.550 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:15.550 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:16.488 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:07:16.488 13:17:58 -- common/autotest_common.sh@1515 -- # sleep 1 00:07:17.426 13:17:59 -- common/autotest_common.sh@1516 -- # bdfs=() 00:07:17.426 13:17:59 -- common/autotest_common.sh@1516 -- # local bdfs 00:07:17.426 13:17:59 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:07:17.426 13:17:59 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:07:17.426 13:17:59 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:17.426 13:17:59 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:17.426 13:17:59 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:17.426 13:17:59 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:17.426 13:17:59 -- common/autotest_common.sh@1497 -- # jq -r 
'.config[].params.traddr' 00:07:17.426 13:17:59 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:17.426 13:17:59 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:84:00.0 00:07:17.426 13:17:59 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:18.803 Waiting for block devices as requested 00:07:18.803 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:07:18.803 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:07:18.803 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:07:19.064 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:07:19.064 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:07:19.064 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:07:19.064 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:07:19.064 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:07:19.323 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:07:19.323 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:07:19.323 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:07:19.580 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:07:19.580 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:07:19.580 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:07:19.580 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:07:19.841 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:07:19.841 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:07:19.841 13:18:01 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:07:19.841 13:18:01 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:84:00.0 00:07:19.841 13:18:01 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:07:19.841 13:18:01 -- common/autotest_common.sh@1485 -- # grep 0000:84:00.0/nvme/nvme 00:07:19.841 13:18:01 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:07:19.841 13:18:01 -- common/autotest_common.sh@1486 -- # [[ -z 
/sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 ]] 00:07:19.841 13:18:01 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:84:00.0/nvme/nvme0 00:07:19.841 13:18:01 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:07:19.841 13:18:01 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:07:19.841 13:18:01 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:07:19.841 13:18:01 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:07:19.841 13:18:01 -- common/autotest_common.sh@1529 -- # grep oacs 00:07:19.841 13:18:01 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:07:20.098 13:18:01 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:07:20.098 13:18:01 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:07:20.098 13:18:01 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:07:20.098 13:18:01 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:07:20.098 13:18:01 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:07:20.098 13:18:01 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:07:20.098 13:18:01 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:07:20.098 13:18:01 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:07:20.098 13:18:01 -- common/autotest_common.sh@1541 -- # continue 00:07:20.098 13:18:01 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:20.098 13:18:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:20.098 13:18:01 -- common/autotest_common.sh@10 -- # set +x 00:07:20.098 13:18:01 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:20.098 13:18:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:20.098 13:18:01 -- common/autotest_common.sh@10 -- # set +x 00:07:20.098 13:18:01 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:21.475 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:21.475 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:07:21.475 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:21.475 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:21.475 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:21.475 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:21.475 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:21.475 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:21.475 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:21.475 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:21.475 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:21.475 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:21.475 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:21.475 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:21.475 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:21.475 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:22.043 0000:84:00.0 (8086 0a54): nvme -> vfio-pci 00:07:22.303 13:18:03 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:22.303 13:18:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:22.303 13:18:03 -- common/autotest_common.sh@10 -- # set +x 00:07:22.303 13:18:03 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:22.303 13:18:03 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:07:22.303 13:18:03 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:07:22.303 13:18:03 -- common/autotest_common.sh@1561 -- # bdfs=() 00:07:22.303 13:18:03 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:07:22.303 13:18:03 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:07:22.303 13:18:03 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:07:22.303 13:18:03 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:07:22.303 13:18:03 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:22.303 13:18:03 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:22.303 13:18:03 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:07:22.303 13:18:03 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:22.303 13:18:03 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:22.303 13:18:04 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:22.303 13:18:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:84:00.0 00:07:22.303 13:18:04 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:07:22.303 13:18:04 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:84:00.0/device 00:07:22.303 13:18:04 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:07:22.303 13:18:04 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:07:22.303 13:18:04 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:07:22.303 13:18:04 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:07:22.303 13:18:04 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:84:00.0 00:07:22.303 13:18:04 -- common/autotest_common.sh@1577 -- # [[ -z 0000:84:00.0 ]] 00:07:22.303 13:18:04 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1681288 00:07:22.303 13:18:04 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:22.303 13:18:04 -- common/autotest_common.sh@1583 -- # waitforlisten 1681288 00:07:22.303 13:18:04 -- common/autotest_common.sh@831 -- # '[' -z 1681288 ']' 00:07:22.303 13:18:04 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:22.303 13:18:04 -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:22.303 13:18:04 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:22.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:22.303 13:18:04 -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:22.303 13:18:04 -- common/autotest_common.sh@10 -- # set +x 00:07:22.563 [2024-10-07 13:18:04.069910] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:07:22.563 [2024-10-07 13:18:04.070010] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1681288 ] 00:07:22.563 [2024-10-07 13:18:04.125756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.563 [2024-10-07 13:18:04.236609] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.820 13:18:04 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.820 13:18:04 -- common/autotest_common.sh@864 -- # return 0 00:07:22.820 13:18:04 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:07:22.820 13:18:04 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:07:22.820 13:18:04 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:84:00.0 00:07:26.148 nvme0n1 00:07:26.148 13:18:07 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:07:26.148 [2024-10-07 13:18:07.850037] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:07:26.148 [2024-10-07 13:18:07.850088] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:07:26.148 request: 00:07:26.148 { 00:07:26.148 "nvme_ctrlr_name": "nvme0", 00:07:26.148 "password": "test", 00:07:26.148 "method": "bdev_nvme_opal_revert", 00:07:26.149 "req_id": 1 00:07:26.149 } 00:07:26.149 Got JSON-RPC error response 00:07:26.149 response: 00:07:26.149 { 00:07:26.149 
"code": -32603, 00:07:26.149 "message": "Internal error" 00:07:26.149 } 00:07:26.407 13:18:07 -- common/autotest_common.sh@1589 -- # true 00:07:26.407 13:18:07 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:07:26.407 13:18:07 -- common/autotest_common.sh@1593 -- # killprocess 1681288 00:07:26.407 13:18:07 -- common/autotest_common.sh@950 -- # '[' -z 1681288 ']' 00:07:26.407 13:18:07 -- common/autotest_common.sh@954 -- # kill -0 1681288 00:07:26.407 13:18:07 -- common/autotest_common.sh@955 -- # uname 00:07:26.407 13:18:07 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.407 13:18:07 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1681288 00:07:26.407 13:18:07 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.407 13:18:07 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.407 13:18:07 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1681288' 00:07:26.407 killing process with pid 1681288 00:07:26.407 13:18:07 -- common/autotest_common.sh@969 -- # kill 1681288 00:07:26.407 13:18:07 -- common/autotest_common.sh@974 -- # wait 1681288 00:07:28.304 13:18:09 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:28.305 13:18:09 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:28.305 13:18:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:28.305 13:18:09 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:28.305 13:18:09 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:28.305 13:18:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:28.305 13:18:09 -- common/autotest_common.sh@10 -- # set +x 00:07:28.305 13:18:09 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:28.305 13:18:09 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:28.305 13:18:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.305 13:18:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.305 13:18:09 -- 
common/autotest_common.sh@10 -- # set +x 00:07:28.305 ************************************ 00:07:28.305 START TEST env 00:07:28.305 ************************************ 00:07:28.305 13:18:09 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:28.305 * Looking for test storage... 00:07:28.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:07:28.305 13:18:09 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:28.305 13:18:09 env -- common/autotest_common.sh@1681 -- # lcov --version 00:07:28.305 13:18:09 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:28.305 13:18:09 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:28.305 13:18:09 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:28.305 13:18:09 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:28.305 13:18:09 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:28.305 13:18:09 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:28.305 13:18:09 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:28.305 13:18:09 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:28.305 13:18:09 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:28.305 13:18:09 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:28.305 13:18:09 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:28.305 13:18:09 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:28.305 13:18:09 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:28.305 13:18:09 env -- scripts/common.sh@344 -- # case "$op" in 00:07:28.305 13:18:09 env -- scripts/common.sh@345 -- # : 1 00:07:28.305 13:18:09 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:28.305 13:18:09 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:28.305 13:18:09 env -- scripts/common.sh@365 -- # decimal 1 00:07:28.305 13:18:09 env -- scripts/common.sh@353 -- # local d=1 00:07:28.305 13:18:09 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:28.305 13:18:09 env -- scripts/common.sh@355 -- # echo 1 00:07:28.305 13:18:09 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:28.305 13:18:09 env -- scripts/common.sh@366 -- # decimal 2 00:07:28.305 13:18:09 env -- scripts/common.sh@353 -- # local d=2 00:07:28.305 13:18:09 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:28.305 13:18:09 env -- scripts/common.sh@355 -- # echo 2 00:07:28.305 13:18:09 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:28.305 13:18:09 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:28.305 13:18:09 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:28.305 13:18:09 env -- scripts/common.sh@368 -- # return 0 00:07:28.305 13:18:09 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:28.305 13:18:09 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:28.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.305 --rc genhtml_branch_coverage=1 00:07:28.305 --rc genhtml_function_coverage=1 00:07:28.305 --rc genhtml_legend=1 00:07:28.305 --rc geninfo_all_blocks=1 00:07:28.305 --rc geninfo_unexecuted_blocks=1 00:07:28.305 00:07:28.305 ' 00:07:28.305 13:18:09 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:28.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.305 --rc genhtml_branch_coverage=1 00:07:28.305 --rc genhtml_function_coverage=1 00:07:28.305 --rc genhtml_legend=1 00:07:28.305 --rc geninfo_all_blocks=1 00:07:28.305 --rc geninfo_unexecuted_blocks=1 00:07:28.305 00:07:28.305 ' 00:07:28.305 13:18:09 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:28.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:28.305 --rc genhtml_branch_coverage=1 00:07:28.305 --rc genhtml_function_coverage=1 00:07:28.305 --rc genhtml_legend=1 00:07:28.305 --rc geninfo_all_blocks=1 00:07:28.305 --rc geninfo_unexecuted_blocks=1 00:07:28.305 00:07:28.305 ' 00:07:28.305 13:18:09 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:28.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:28.305 --rc genhtml_branch_coverage=1 00:07:28.305 --rc genhtml_function_coverage=1 00:07:28.305 --rc genhtml_legend=1 00:07:28.305 --rc geninfo_all_blocks=1 00:07:28.305 --rc geninfo_unexecuted_blocks=1 00:07:28.305 00:07:28.305 ' 00:07:28.305 13:18:09 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:28.305 13:18:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:28.305 13:18:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.305 13:18:09 env -- common/autotest_common.sh@10 -- # set +x 00:07:28.305 ************************************ 00:07:28.305 START TEST env_memory 00:07:28.305 ************************************ 00:07:28.305 13:18:09 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:28.305 00:07:28.305 00:07:28.305 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.305 http://cunit.sourceforge.net/ 00:07:28.305 00:07:28.305 00:07:28.305 Suite: memory 00:07:28.305 Test: alloc and free memory map ...[2024-10-07 13:18:09.892740] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:28.305 passed 00:07:28.305 Test: mem map translation ...[2024-10-07 13:18:09.912687] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:28.305 [2024-10-07 
13:18:09.912709] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:28.305 [2024-10-07 13:18:09.912755] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:28.305 [2024-10-07 13:18:09.912767] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:28.305 passed 00:07:28.305 Test: mem map registration ...[2024-10-07 13:18:09.953617] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:28.305 [2024-10-07 13:18:09.953636] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:28.305 passed 00:07:28.305 Test: mem map adjacent registrations ...passed 00:07:28.305 00:07:28.305 Run Summary: Type Total Ran Passed Failed Inactive 00:07:28.305 suites 1 1 n/a 0 0 00:07:28.305 tests 4 4 4 0 0 00:07:28.305 asserts 152 152 152 0 n/a 00:07:28.305 00:07:28.305 Elapsed time = 0.143 seconds 00:07:28.305 00:07:28.305 real 0m0.152s 00:07:28.305 user 0m0.145s 00:07:28.305 sys 0m0.006s 00:07:28.305 13:18:10 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.305 13:18:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:28.305 ************************************ 00:07:28.305 END TEST env_memory 00:07:28.305 ************************************ 00:07:28.564 13:18:10 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:28.564 13:18:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:07:28.564 13:18:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.564 13:18:10 env -- common/autotest_common.sh@10 -- # set +x 00:07:28.564 ************************************ 00:07:28.564 START TEST env_vtophys 00:07:28.564 ************************************ 00:07:28.564 13:18:10 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:28.564 EAL: lib.eal log level changed from notice to debug 00:07:28.564 EAL: Detected lcore 0 as core 0 on socket 0 00:07:28.564 EAL: Detected lcore 1 as core 1 on socket 0 00:07:28.564 EAL: Detected lcore 2 as core 2 on socket 0 00:07:28.564 EAL: Detected lcore 3 as core 3 on socket 0 00:07:28.564 EAL: Detected lcore 4 as core 4 on socket 0 00:07:28.564 EAL: Detected lcore 5 as core 5 on socket 0 00:07:28.564 EAL: Detected lcore 6 as core 8 on socket 0 00:07:28.564 EAL: Detected lcore 7 as core 9 on socket 0 00:07:28.564 EAL: Detected lcore 8 as core 10 on socket 0 00:07:28.564 EAL: Detected lcore 9 as core 11 on socket 0 00:07:28.564 EAL: Detected lcore 10 as core 12 on socket 0 00:07:28.564 EAL: Detected lcore 11 as core 13 on socket 0 00:07:28.564 EAL: Detected lcore 12 as core 0 on socket 1 00:07:28.564 EAL: Detected lcore 13 as core 1 on socket 1 00:07:28.564 EAL: Detected lcore 14 as core 2 on socket 1 00:07:28.564 EAL: Detected lcore 15 as core 3 on socket 1 00:07:28.564 EAL: Detected lcore 16 as core 4 on socket 1 00:07:28.564 EAL: Detected lcore 17 as core 5 on socket 1 00:07:28.564 EAL: Detected lcore 18 as core 8 on socket 1 00:07:28.564 EAL: Detected lcore 19 as core 9 on socket 1 00:07:28.564 EAL: Detected lcore 20 as core 10 on socket 1 00:07:28.564 EAL: Detected lcore 21 as core 11 on socket 1 00:07:28.564 EAL: Detected lcore 22 as core 12 on socket 1 00:07:28.564 EAL: Detected lcore 23 as core 13 on socket 1 00:07:28.564 EAL: Detected lcore 24 as core 0 on socket 0 00:07:28.564 EAL: Detected lcore 25 as core 
1 on socket 0 00:07:28.564 EAL: Detected lcore 26 as core 2 on socket 0 00:07:28.564 EAL: Detected lcore 27 as core 3 on socket 0 00:07:28.564 EAL: Detected lcore 28 as core 4 on socket 0 00:07:28.564 EAL: Detected lcore 29 as core 5 on socket 0 00:07:28.564 EAL: Detected lcore 30 as core 8 on socket 0 00:07:28.564 EAL: Detected lcore 31 as core 9 on socket 0 00:07:28.564 EAL: Detected lcore 32 as core 10 on socket 0 00:07:28.564 EAL: Detected lcore 33 as core 11 on socket 0 00:07:28.564 EAL: Detected lcore 34 as core 12 on socket 0 00:07:28.564 EAL: Detected lcore 35 as core 13 on socket 0 00:07:28.564 EAL: Detected lcore 36 as core 0 on socket 1 00:07:28.564 EAL: Detected lcore 37 as core 1 on socket 1 00:07:28.564 EAL: Detected lcore 38 as core 2 on socket 1 00:07:28.564 EAL: Detected lcore 39 as core 3 on socket 1 00:07:28.564 EAL: Detected lcore 40 as core 4 on socket 1 00:07:28.564 EAL: Detected lcore 41 as core 5 on socket 1 00:07:28.564 EAL: Detected lcore 42 as core 8 on socket 1 00:07:28.564 EAL: Detected lcore 43 as core 9 on socket 1 00:07:28.564 EAL: Detected lcore 44 as core 10 on socket 1 00:07:28.564 EAL: Detected lcore 45 as core 11 on socket 1 00:07:28.564 EAL: Detected lcore 46 as core 12 on socket 1 00:07:28.564 EAL: Detected lcore 47 as core 13 on socket 1 00:07:28.564 EAL: Maximum logical cores by configuration: 128 00:07:28.564 EAL: Detected CPU lcores: 48 00:07:28.564 EAL: Detected NUMA nodes: 2 00:07:28.564 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:28.564 EAL: Detected shared linkage of DPDK 00:07:28.564 EAL: No shared files mode enabled, IPC will be disabled 00:07:28.564 EAL: Bus pci wants IOVA as 'DC' 00:07:28.564 EAL: Buses did not request a specific IOVA mode. 00:07:28.564 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:28.564 EAL: Selected IOVA mode 'VA' 00:07:28.564 EAL: Probing VFIO support... 
00:07:28.564 EAL: IOMMU type 1 (Type 1) is supported 00:07:28.564 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:28.564 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:28.564 EAL: VFIO support initialized 00:07:28.564 EAL: Ask a virtual area of 0x2e000 bytes 00:07:28.564 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:28.564 EAL: Setting up physically contiguous memory... 00:07:28.564 EAL: Setting maximum number of open files to 524288 00:07:28.564 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:28.564 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:28.564 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:28.564 EAL: Ask a virtual area of 0x61000 bytes 00:07:28.564 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:28.564 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:28.564 EAL: Ask a virtual area of 0x400000000 bytes 00:07:28.564 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:28.564 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:28.564 EAL: Ask a virtual area of 0x61000 bytes 00:07:28.564 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:28.564 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:28.564 EAL: Ask a virtual area of 0x400000000 bytes 00:07:28.564 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:28.564 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:28.564 EAL: Ask a virtual area of 0x61000 bytes 00:07:28.564 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:28.564 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:28.564 EAL: Ask a virtual area of 0x400000000 bytes 00:07:28.564 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:28.564 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:28.564 EAL: Ask a virtual area of 0x61000 bytes 00:07:28.564 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:28.564 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:28.564 EAL: Ask a virtual area of 0x400000000 bytes 00:07:28.564 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:28.564 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:28.564 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:07:28.564 EAL: Ask a virtual area of 0x61000 bytes 00:07:28.564 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:28.564 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:28.564 EAL: Ask a virtual area of 0x400000000 bytes 00:07:28.564 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:28.564 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:28.564 EAL: Ask a virtual area of 0x61000 bytes 00:07:28.564 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:28.564 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:28.564 EAL: Ask a virtual area of 0x400000000 bytes 00:07:28.564 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:28.564 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:28.564 EAL: Ask a virtual area of 0x61000 bytes 00:07:28.564 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:28.564 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:28.564 EAL: Ask a virtual area of 0x400000000 bytes 00:07:28.564 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:28.564 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:28.564 EAL: Ask a virtual area of 0x61000 bytes 00:07:28.564 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:28.564 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:28.564 EAL: Ask a virtual area of 0x400000000 bytes 00:07:28.564 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:07:28.564 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:28.564 EAL: Hugepages will be freed exactly as allocated. 00:07:28.564 EAL: No shared files mode enabled, IPC is disabled 00:07:28.564 EAL: No shared files mode enabled, IPC is disabled 00:07:28.564 EAL: TSC frequency is ~2700000 KHz 00:07:28.564 EAL: Main lcore 0 is ready (tid=7f0f8fbc1a00;cpuset=[0]) 00:07:28.564 EAL: Trying to obtain current memory policy. 00:07:28.564 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.564 EAL: Restoring previous memory policy: 0 00:07:28.564 EAL: request: mp_malloc_sync 00:07:28.564 EAL: No shared files mode enabled, IPC is disabled 00:07:28.564 EAL: Heap on socket 0 was expanded by 2MB 00:07:28.564 EAL: No shared files mode enabled, IPC is disabled 00:07:28.564 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:28.565 EAL: Mem event callback 'spdk:(nil)' registered 00:07:28.565 00:07:28.565 00:07:28.565 CUnit - A unit testing framework for C - Version 2.1-3 00:07:28.565 http://cunit.sourceforge.net/ 00:07:28.565 00:07:28.565 00:07:28.565 Suite: components_suite 00:07:28.565 Test: vtophys_malloc_test ...passed 00:07:28.565 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:28.565 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.565 EAL: Restoring previous memory policy: 4 00:07:28.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.565 EAL: request: mp_malloc_sync 00:07:28.565 EAL: No shared files mode enabled, IPC is disabled 00:07:28.565 EAL: Heap on socket 0 was expanded by 4MB 00:07:28.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.565 EAL: request: mp_malloc_sync 00:07:28.565 EAL: No shared files mode enabled, IPC is disabled 00:07:28.565 EAL: Heap on socket 0 was shrunk by 4MB 00:07:28.565 EAL: Trying to obtain current memory policy. 
00:07:28.565 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.565 EAL: Restoring previous memory policy: 4 00:07:28.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.565 EAL: request: mp_malloc_sync 00:07:28.565 EAL: No shared files mode enabled, IPC is disabled 00:07:28.565 EAL: Heap on socket 0 was expanded by 6MB 00:07:28.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.565 EAL: request: mp_malloc_sync 00:07:28.565 EAL: No shared files mode enabled, IPC is disabled 00:07:28.565 EAL: Heap on socket 0 was shrunk by 6MB 00:07:28.565 EAL: Trying to obtain current memory policy. 00:07:28.565 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.565 EAL: Restoring previous memory policy: 4 00:07:28.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.565 EAL: request: mp_malloc_sync 00:07:28.565 EAL: No shared files mode enabled, IPC is disabled 00:07:28.565 EAL: Heap on socket 0 was expanded by 10MB 00:07:28.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.565 EAL: request: mp_malloc_sync 00:07:28.565 EAL: No shared files mode enabled, IPC is disabled 00:07:28.565 EAL: Heap on socket 0 was shrunk by 10MB 00:07:28.565 EAL: Trying to obtain current memory policy. 00:07:28.565 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.565 EAL: Restoring previous memory policy: 4 00:07:28.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.565 EAL: request: mp_malloc_sync 00:07:28.565 EAL: No shared files mode enabled, IPC is disabled 00:07:28.565 EAL: Heap on socket 0 was expanded by 18MB 00:07:28.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.565 EAL: request: mp_malloc_sync 00:07:28.565 EAL: No shared files mode enabled, IPC is disabled 00:07:28.565 EAL: Heap on socket 0 was shrunk by 18MB 00:07:28.565 EAL: Trying to obtain current memory policy. 
00:07:28.565 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.565 EAL: Restoring previous memory policy: 4 00:07:28.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.565 EAL: request: mp_malloc_sync 00:07:28.565 EAL: No shared files mode enabled, IPC is disabled 00:07:28.565 EAL: Heap on socket 0 was expanded by 34MB 00:07:28.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.565 EAL: request: mp_malloc_sync 00:07:28.565 EAL: No shared files mode enabled, IPC is disabled 00:07:28.565 EAL: Heap on socket 0 was shrunk by 34MB 00:07:28.565 EAL: Trying to obtain current memory policy. 00:07:28.565 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.565 EAL: Restoring previous memory policy: 4 00:07:28.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.565 EAL: request: mp_malloc_sync 00:07:28.565 EAL: No shared files mode enabled, IPC is disabled 00:07:28.565 EAL: Heap on socket 0 was expanded by 66MB 00:07:28.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.565 EAL: request: mp_malloc_sync 00:07:28.565 EAL: No shared files mode enabled, IPC is disabled 00:07:28.565 EAL: Heap on socket 0 was shrunk by 66MB 00:07:28.565 EAL: Trying to obtain current memory policy. 00:07:28.565 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.565 EAL: Restoring previous memory policy: 4 00:07:28.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.565 EAL: request: mp_malloc_sync 00:07:28.565 EAL: No shared files mode enabled, IPC is disabled 00:07:28.565 EAL: Heap on socket 0 was expanded by 130MB 00:07:28.565 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.823 EAL: request: mp_malloc_sync 00:07:28.823 EAL: No shared files mode enabled, IPC is disabled 00:07:28.823 EAL: Heap on socket 0 was shrunk by 130MB 00:07:28.823 EAL: Trying to obtain current memory policy. 
00:07:28.823 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:28.823 EAL: Restoring previous memory policy: 4 00:07:28.823 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.823 EAL: request: mp_malloc_sync 00:07:28.823 EAL: No shared files mode enabled, IPC is disabled 00:07:28.823 EAL: Heap on socket 0 was expanded by 258MB 00:07:28.823 EAL: Calling mem event callback 'spdk:(nil)' 00:07:28.823 EAL: request: mp_malloc_sync 00:07:28.823 EAL: No shared files mode enabled, IPC is disabled 00:07:28.823 EAL: Heap on socket 0 was shrunk by 258MB 00:07:28.823 EAL: Trying to obtain current memory policy. 00:07:28.823 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.082 EAL: Restoring previous memory policy: 4 00:07:29.082 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.082 EAL: request: mp_malloc_sync 00:07:29.082 EAL: No shared files mode enabled, IPC is disabled 00:07:29.082 EAL: Heap on socket 0 was expanded by 514MB 00:07:29.082 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.082 EAL: request: mp_malloc_sync 00:07:29.082 EAL: No shared files mode enabled, IPC is disabled 00:07:29.082 EAL: Heap on socket 0 was shrunk by 514MB 00:07:29.082 EAL: Trying to obtain current memory policy. 
00:07:29.082 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:29.340 EAL: Restoring previous memory policy: 4 00:07:29.340 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.340 EAL: request: mp_malloc_sync 00:07:29.340 EAL: No shared files mode enabled, IPC is disabled 00:07:29.340 EAL: Heap on socket 0 was expanded by 1026MB 00:07:29.597 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.855 EAL: request: mp_malloc_sync 00:07:29.855 EAL: No shared files mode enabled, IPC is disabled 00:07:29.855 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:29.855 passed 00:07:29.855 00:07:29.855 Run Summary: Type Total Ran Passed Failed Inactive 00:07:29.855 suites 1 1 n/a 0 0 00:07:29.855 tests 2 2 2 0 0 00:07:29.855 asserts 497 497 497 0 n/a 00:07:29.855 00:07:29.855 Elapsed time = 1.301 seconds 00:07:29.855 EAL: Calling mem event callback 'spdk:(nil)' 00:07:29.855 EAL: request: mp_malloc_sync 00:07:29.855 EAL: No shared files mode enabled, IPC is disabled 00:07:29.855 EAL: Heap on socket 0 was shrunk by 2MB 00:07:29.855 EAL: No shared files mode enabled, IPC is disabled 00:07:29.855 EAL: No shared files mode enabled, IPC is disabled 00:07:29.855 EAL: No shared files mode enabled, IPC is disabled 00:07:29.855 00:07:29.855 real 0m1.412s 00:07:29.855 user 0m0.814s 00:07:29.855 sys 0m0.565s 00:07:29.855 13:18:11 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.855 13:18:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:29.855 ************************************ 00:07:29.855 END TEST env_vtophys 00:07:29.855 ************************************ 00:07:29.855 13:18:11 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:29.855 13:18:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.855 13:18:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.855 13:18:11 env -- common/autotest_common.sh@10 -- # set +x 00:07:29.855 
************************************ 00:07:29.855 START TEST env_pci 00:07:29.855 ************************************ 00:07:29.855 13:18:11 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:29.855 00:07:29.855 00:07:29.855 CUnit - A unit testing framework for C - Version 2.1-3 00:07:29.855 http://cunit.sourceforge.net/ 00:07:29.855 00:07:29.855 00:07:29.855 Suite: pci 00:07:29.855 Test: pci_hook ...[2024-10-07 13:18:11.537441] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1682661 has claimed it 00:07:29.855 EAL: Cannot find device (10000:00:01.0) 00:07:29.855 EAL: Failed to attach device on primary process 00:07:29.855 passed 00:07:29.855 00:07:29.855 Run Summary: Type Total Ran Passed Failed Inactive 00:07:29.855 suites 1 1 n/a 0 0 00:07:29.855 tests 1 1 1 0 0 00:07:29.855 asserts 25 25 25 0 n/a 00:07:29.855 00:07:29.855 Elapsed time = 0.020 seconds 00:07:29.855 00:07:29.855 real 0m0.032s 00:07:29.856 user 0m0.011s 00:07:29.856 sys 0m0.021s 00:07:29.856 13:18:11 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.856 13:18:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:29.856 ************************************ 00:07:29.856 END TEST env_pci 00:07:29.856 ************************************ 00:07:30.115 13:18:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:30.115 13:18:11 env -- env/env.sh@15 -- # uname 00:07:30.115 13:18:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:30.115 13:18:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:30.115 13:18:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:30.115 13:18:11 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:30.115 13:18:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.115 13:18:11 env -- common/autotest_common.sh@10 -- # set +x 00:07:30.115 ************************************ 00:07:30.115 START TEST env_dpdk_post_init 00:07:30.115 ************************************ 00:07:30.115 13:18:11 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:30.115 EAL: Detected CPU lcores: 48 00:07:30.115 EAL: Detected NUMA nodes: 2 00:07:30.115 EAL: Detected shared linkage of DPDK 00:07:30.115 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:30.115 EAL: Selected IOVA mode 'VA' 00:07:30.116 EAL: VFIO support initialized 00:07:30.116 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:30.116 EAL: Using IOMMU type 1 (Type 1) 00:07:30.116 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:07:30.116 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:07:30.116 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:07:30.116 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:07:30.116 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:07:30.116 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:07:30.116 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:07:30.116 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:07:30.116 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:07:30.375 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:07:30.375 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:07:30.375 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:07:30.375 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:07:30.375 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:07:30.375 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:07:30.375 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:07:30.946 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:84:00.0 (socket 1) 00:07:34.228 EAL: Releasing PCI mapped resource for 0000:84:00.0 00:07:34.228 EAL: Calling pci_unmap_resource for 0000:84:00.0 at 0x202001040000 00:07:34.228 Starting DPDK initialization... 00:07:34.228 Starting SPDK post initialization... 00:07:34.228 SPDK NVMe probe 00:07:34.228 Attaching to 0000:84:00.0 00:07:34.228 Attached to 0000:84:00.0 00:07:34.228 Cleaning up... 00:07:34.228 00:07:34.228 real 0m4.317s 00:07:34.228 user 0m2.961s 00:07:34.228 sys 0m0.417s 00:07:34.228 13:18:15 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.228 13:18:15 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:34.228 ************************************ 00:07:34.228 END TEST env_dpdk_post_init 00:07:34.228 ************************************ 00:07:34.487 13:18:15 env -- env/env.sh@26 -- # uname 00:07:34.487 13:18:15 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:34.487 13:18:15 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:34.487 13:18:15 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.487 13:18:15 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.487 13:18:15 env -- common/autotest_common.sh@10 -- # set +x 00:07:34.487 ************************************ 00:07:34.487 START TEST env_mem_callbacks 00:07:34.487 ************************************ 00:07:34.487 13:18:15 
env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:34.487 EAL: Detected CPU lcores: 48 00:07:34.487 EAL: Detected NUMA nodes: 2 00:07:34.487 EAL: Detected shared linkage of DPDK 00:07:34.487 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:34.487 EAL: Selected IOVA mode 'VA' 00:07:34.487 EAL: VFIO support initialized 00:07:34.487 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:34.487 00:07:34.487 00:07:34.487 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.487 http://cunit.sourceforge.net/ 00:07:34.487 00:07:34.487 00:07:34.487 Suite: memory 00:07:34.487 Test: test ... 00:07:34.487 register 0x200000200000 2097152 00:07:34.487 malloc 3145728 00:07:34.487 register 0x200000400000 4194304 00:07:34.487 buf 0x200000500000 len 3145728 PASSED 00:07:34.487 malloc 64 00:07:34.487 buf 0x2000004fff40 len 64 PASSED 00:07:34.487 malloc 4194304 00:07:34.487 register 0x200000800000 6291456 00:07:34.487 buf 0x200000a00000 len 4194304 PASSED 00:07:34.487 free 0x200000500000 3145728 00:07:34.487 free 0x2000004fff40 64 00:07:34.487 unregister 0x200000400000 4194304 PASSED 00:07:34.487 free 0x200000a00000 4194304 00:07:34.487 unregister 0x200000800000 6291456 PASSED 00:07:34.487 malloc 8388608 00:07:34.487 register 0x200000400000 10485760 00:07:34.487 buf 0x200000600000 len 8388608 PASSED 00:07:34.487 free 0x200000600000 8388608 00:07:34.487 unregister 0x200000400000 10485760 PASSED 00:07:34.487 passed 00:07:34.487 00:07:34.487 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.487 suites 1 1 n/a 0 0 00:07:34.487 tests 1 1 1 0 0 00:07:34.487 asserts 15 15 15 0 n/a 00:07:34.487 00:07:34.487 Elapsed time = 0.005 seconds 00:07:34.487 00:07:34.487 real 0m0.049s 00:07:34.487 user 0m0.008s 00:07:34.487 sys 0m0.040s 00:07:34.487 13:18:16 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.487 13:18:16 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:34.487 ************************************ 00:07:34.487 END TEST env_mem_callbacks 00:07:34.487 ************************************ 00:07:34.487 00:07:34.487 real 0m6.371s 00:07:34.487 user 0m4.153s 00:07:34.487 sys 0m1.269s 00:07:34.487 13:18:16 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.487 13:18:16 env -- common/autotest_common.sh@10 -- # set +x 00:07:34.487 ************************************ 00:07:34.487 END TEST env 00:07:34.487 ************************************ 00:07:34.487 13:18:16 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:34.487 13:18:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.487 13:18:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.487 13:18:16 -- common/autotest_common.sh@10 -- # set +x 00:07:34.487 ************************************ 00:07:34.487 START TEST rpc 00:07:34.487 ************************************ 00:07:34.487 13:18:16 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:34.487 * Looking for test storage... 
00:07:34.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:34.487 13:18:16 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:34.487 13:18:16 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:34.487 13:18:16 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:34.746 13:18:16 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:34.746 13:18:16 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.746 13:18:16 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.746 13:18:16 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.746 13:18:16 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.746 13:18:16 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.746 13:18:16 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.746 13:18:16 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.746 13:18:16 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.746 13:18:16 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.746 13:18:16 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.746 13:18:16 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.746 13:18:16 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:34.746 13:18:16 rpc -- scripts/common.sh@345 -- # : 1 00:07:34.746 13:18:16 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.746 13:18:16 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:34.746 13:18:16 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:34.746 13:18:16 rpc -- scripts/common.sh@353 -- # local d=1 00:07:34.746 13:18:16 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.746 13:18:16 rpc -- scripts/common.sh@355 -- # echo 1 00:07:34.746 13:18:16 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.746 13:18:16 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:34.746 13:18:16 rpc -- scripts/common.sh@353 -- # local d=2 00:07:34.746 13:18:16 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.746 13:18:16 rpc -- scripts/common.sh@355 -- # echo 2 00:07:34.746 13:18:16 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.746 13:18:16 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.746 13:18:16 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.746 13:18:16 rpc -- scripts/common.sh@368 -- # return 0 00:07:34.746 13:18:16 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.746 13:18:16 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:34.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.746 --rc genhtml_branch_coverage=1 00:07:34.746 --rc genhtml_function_coverage=1 00:07:34.746 --rc genhtml_legend=1 00:07:34.746 --rc geninfo_all_blocks=1 00:07:34.746 --rc geninfo_unexecuted_blocks=1 00:07:34.746 00:07:34.746 ' 00:07:34.746 13:18:16 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:34.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.746 --rc genhtml_branch_coverage=1 00:07:34.746 --rc genhtml_function_coverage=1 00:07:34.746 --rc genhtml_legend=1 00:07:34.746 --rc geninfo_all_blocks=1 00:07:34.746 --rc geninfo_unexecuted_blocks=1 00:07:34.746 00:07:34.746 ' 00:07:34.746 13:18:16 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:34.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:34.746 --rc genhtml_branch_coverage=1 00:07:34.746 --rc genhtml_function_coverage=1 00:07:34.746 --rc genhtml_legend=1 00:07:34.746 --rc geninfo_all_blocks=1 00:07:34.746 --rc geninfo_unexecuted_blocks=1 00:07:34.746 00:07:34.746 ' 00:07:34.746 13:18:16 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:34.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.746 --rc genhtml_branch_coverage=1 00:07:34.746 --rc genhtml_function_coverage=1 00:07:34.746 --rc genhtml_legend=1 00:07:34.746 --rc geninfo_all_blocks=1 00:07:34.746 --rc geninfo_unexecuted_blocks=1 00:07:34.746 00:07:34.746 ' 00:07:34.746 13:18:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1683418 00:07:34.746 13:18:16 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:34.746 13:18:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:34.746 13:18:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1683418 00:07:34.746 13:18:16 rpc -- common/autotest_common.sh@831 -- # '[' -z 1683418 ']' 00:07:34.746 13:18:16 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.746 13:18:16 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:34.746 13:18:16 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.746 13:18:16 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:34.746 13:18:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:34.746 [2024-10-07 13:18:16.290810] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:07:34.746 [2024-10-07 13:18:16.290889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1683418 ] 00:07:34.746 [2024-10-07 13:18:16.344782] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.746 [2024-10-07 13:18:16.446930] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:34.746 [2024-10-07 13:18:16.447006] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1683418' to capture a snapshot of events at runtime. 00:07:34.746 [2024-10-07 13:18:16.447030] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.746 [2024-10-07 13:18:16.447040] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.746 [2024-10-07 13:18:16.447050] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1683418 for offline analysis/debug. 
00:07:34.747 [2024-10-07 13:18:16.447574] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.005 13:18:16 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:35.005 13:18:16 rpc -- common/autotest_common.sh@864 -- # return 0 00:07:35.005 13:18:16 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:35.005 13:18:16 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:35.005 13:18:16 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:35.005 13:18:16 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:35.005 13:18:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.005 13:18:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.005 13:18:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.263 ************************************ 00:07:35.263 START TEST rpc_integrity 00:07:35.263 ************************************ 00:07:35.263 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:35.263 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:35.263 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.263 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.263 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.263 13:18:16 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:07:35.263 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:35.263 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:35.263 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:35.263 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.263 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.263 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.263 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:35.263 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:35.263 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.263 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.263 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.263 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:35.263 { 00:07:35.263 "name": "Malloc0", 00:07:35.263 "aliases": [ 00:07:35.263 "6118c187-858c-421d-9934-b159cec9dc86" 00:07:35.263 ], 00:07:35.263 "product_name": "Malloc disk", 00:07:35.263 "block_size": 512, 00:07:35.263 "num_blocks": 16384, 00:07:35.263 "uuid": "6118c187-858c-421d-9934-b159cec9dc86", 00:07:35.263 "assigned_rate_limits": { 00:07:35.263 "rw_ios_per_sec": 0, 00:07:35.263 "rw_mbytes_per_sec": 0, 00:07:35.263 "r_mbytes_per_sec": 0, 00:07:35.264 "w_mbytes_per_sec": 0 00:07:35.264 }, 00:07:35.264 "claimed": false, 00:07:35.264 "zoned": false, 00:07:35.264 "supported_io_types": { 00:07:35.264 "read": true, 00:07:35.264 "write": true, 00:07:35.264 "unmap": true, 00:07:35.264 "flush": true, 00:07:35.264 "reset": true, 00:07:35.264 "nvme_admin": false, 00:07:35.264 "nvme_io": false, 00:07:35.264 "nvme_io_md": false, 00:07:35.264 "write_zeroes": true, 00:07:35.264 "zcopy": true, 00:07:35.264 "get_zone_info": false, 00:07:35.264 
"zone_management": false, 00:07:35.264 "zone_append": false, 00:07:35.264 "compare": false, 00:07:35.264 "compare_and_write": false, 00:07:35.264 "abort": true, 00:07:35.264 "seek_hole": false, 00:07:35.264 "seek_data": false, 00:07:35.264 "copy": true, 00:07:35.264 "nvme_iov_md": false 00:07:35.264 }, 00:07:35.264 "memory_domains": [ 00:07:35.264 { 00:07:35.264 "dma_device_id": "system", 00:07:35.264 "dma_device_type": 1 00:07:35.264 }, 00:07:35.264 { 00:07:35.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.264 "dma_device_type": 2 00:07:35.264 } 00:07:35.264 ], 00:07:35.264 "driver_specific": {} 00:07:35.264 } 00:07:35.264 ]' 00:07:35.264 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:35.264 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:35.264 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.264 [2024-10-07 13:18:16.828009] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:35.264 [2024-10-07 13:18:16.828062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.264 [2024-10-07 13:18:16.828083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x539780 00:07:35.264 [2024-10-07 13:18:16.828096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.264 [2024-10-07 13:18:16.829373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.264 [2024-10-07 13:18:16.829396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:35.264 Passthru0 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.264 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.264 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:35.264 { 00:07:35.264 "name": "Malloc0", 00:07:35.264 "aliases": [ 00:07:35.264 "6118c187-858c-421d-9934-b159cec9dc86" 00:07:35.264 ], 00:07:35.264 "product_name": "Malloc disk", 00:07:35.264 "block_size": 512, 00:07:35.264 "num_blocks": 16384, 00:07:35.264 "uuid": "6118c187-858c-421d-9934-b159cec9dc86", 00:07:35.264 "assigned_rate_limits": { 00:07:35.264 "rw_ios_per_sec": 0, 00:07:35.264 "rw_mbytes_per_sec": 0, 00:07:35.264 "r_mbytes_per_sec": 0, 00:07:35.264 "w_mbytes_per_sec": 0 00:07:35.264 }, 00:07:35.264 "claimed": true, 00:07:35.264 "claim_type": "exclusive_write", 00:07:35.264 "zoned": false, 00:07:35.264 "supported_io_types": { 00:07:35.264 "read": true, 00:07:35.264 "write": true, 00:07:35.264 "unmap": true, 00:07:35.264 "flush": true, 00:07:35.264 "reset": true, 00:07:35.264 "nvme_admin": false, 00:07:35.264 "nvme_io": false, 00:07:35.264 "nvme_io_md": false, 00:07:35.264 "write_zeroes": true, 00:07:35.264 "zcopy": true, 00:07:35.264 "get_zone_info": false, 00:07:35.264 "zone_management": false, 00:07:35.264 "zone_append": false, 00:07:35.264 "compare": false, 00:07:35.264 "compare_and_write": false, 00:07:35.264 "abort": true, 00:07:35.264 "seek_hole": false, 00:07:35.264 "seek_data": false, 00:07:35.264 "copy": true, 00:07:35.264 "nvme_iov_md": false 00:07:35.264 }, 00:07:35.264 "memory_domains": [ 00:07:35.264 { 00:07:35.264 "dma_device_id": "system", 00:07:35.264 "dma_device_type": 1 00:07:35.264 }, 00:07:35.264 { 00:07:35.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.264 "dma_device_type": 2 00:07:35.264 } 00:07:35.264 ], 00:07:35.264 "driver_specific": {} 00:07:35.264 }, 00:07:35.264 { 
00:07:35.264 "name": "Passthru0", 00:07:35.264 "aliases": [ 00:07:35.264 "287087e9-6e36-5dec-81c4-d8be82959732" 00:07:35.264 ], 00:07:35.264 "product_name": "passthru", 00:07:35.264 "block_size": 512, 00:07:35.264 "num_blocks": 16384, 00:07:35.264 "uuid": "287087e9-6e36-5dec-81c4-d8be82959732", 00:07:35.264 "assigned_rate_limits": { 00:07:35.264 "rw_ios_per_sec": 0, 00:07:35.264 "rw_mbytes_per_sec": 0, 00:07:35.264 "r_mbytes_per_sec": 0, 00:07:35.264 "w_mbytes_per_sec": 0 00:07:35.264 }, 00:07:35.264 "claimed": false, 00:07:35.264 "zoned": false, 00:07:35.264 "supported_io_types": { 00:07:35.264 "read": true, 00:07:35.264 "write": true, 00:07:35.264 "unmap": true, 00:07:35.264 "flush": true, 00:07:35.264 "reset": true, 00:07:35.264 "nvme_admin": false, 00:07:35.264 "nvme_io": false, 00:07:35.264 "nvme_io_md": false, 00:07:35.264 "write_zeroes": true, 00:07:35.264 "zcopy": true, 00:07:35.264 "get_zone_info": false, 00:07:35.264 "zone_management": false, 00:07:35.264 "zone_append": false, 00:07:35.264 "compare": false, 00:07:35.264 "compare_and_write": false, 00:07:35.264 "abort": true, 00:07:35.264 "seek_hole": false, 00:07:35.264 "seek_data": false, 00:07:35.264 "copy": true, 00:07:35.264 "nvme_iov_md": false 00:07:35.264 }, 00:07:35.264 "memory_domains": [ 00:07:35.264 { 00:07:35.264 "dma_device_id": "system", 00:07:35.264 "dma_device_type": 1 00:07:35.264 }, 00:07:35.264 { 00:07:35.264 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.264 "dma_device_type": 2 00:07:35.264 } 00:07:35.264 ], 00:07:35.264 "driver_specific": { 00:07:35.264 "passthru": { 00:07:35.264 "name": "Passthru0", 00:07:35.264 "base_bdev_name": "Malloc0" 00:07:35.264 } 00:07:35.264 } 00:07:35.264 } 00:07:35.264 ]' 00:07:35.264 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:35.264 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:35.264 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:35.264 13:18:16 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.264 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.264 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.264 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:35.264 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:35.264 13:18:16 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:35.264 00:07:35.264 real 0m0.211s 00:07:35.264 user 0m0.139s 00:07:35.264 sys 0m0.018s 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.264 13:18:16 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.264 ************************************ 00:07:35.264 END TEST rpc_integrity 00:07:35.264 ************************************ 00:07:35.264 13:18:16 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:35.264 13:18:16 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.264 13:18:16 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.264 13:18:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.523 ************************************ 00:07:35.523 START TEST rpc_plugins 
00:07:35.523 ************************************ 00:07:35.523 13:18:16 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:07:35.523 13:18:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:35.523 13:18:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.523 13:18:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:35.523 13:18:16 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.523 13:18:16 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:35.523 13:18:16 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:35.523 13:18:16 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.523 13:18:16 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:35.523 13:18:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.523 13:18:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:35.523 { 00:07:35.523 "name": "Malloc1", 00:07:35.523 "aliases": [ 00:07:35.523 "6935c028-9860-4686-adb0-6c8dc1b36a4e" 00:07:35.523 ], 00:07:35.523 "product_name": "Malloc disk", 00:07:35.523 "block_size": 4096, 00:07:35.523 "num_blocks": 256, 00:07:35.523 "uuid": "6935c028-9860-4686-adb0-6c8dc1b36a4e", 00:07:35.523 "assigned_rate_limits": { 00:07:35.523 "rw_ios_per_sec": 0, 00:07:35.523 "rw_mbytes_per_sec": 0, 00:07:35.523 "r_mbytes_per_sec": 0, 00:07:35.523 "w_mbytes_per_sec": 0 00:07:35.523 }, 00:07:35.523 "claimed": false, 00:07:35.523 "zoned": false, 00:07:35.523 "supported_io_types": { 00:07:35.523 "read": true, 00:07:35.523 "write": true, 00:07:35.523 "unmap": true, 00:07:35.523 "flush": true, 00:07:35.523 "reset": true, 00:07:35.523 "nvme_admin": false, 00:07:35.523 "nvme_io": false, 00:07:35.523 "nvme_io_md": false, 00:07:35.523 "write_zeroes": true, 00:07:35.523 "zcopy": true, 00:07:35.523 "get_zone_info": false, 00:07:35.523 "zone_management": false, 00:07:35.523 
"zone_append": false, 00:07:35.523 "compare": false, 00:07:35.523 "compare_and_write": false, 00:07:35.523 "abort": true, 00:07:35.523 "seek_hole": false, 00:07:35.523 "seek_data": false, 00:07:35.523 "copy": true, 00:07:35.523 "nvme_iov_md": false 00:07:35.523 }, 00:07:35.523 "memory_domains": [ 00:07:35.523 { 00:07:35.523 "dma_device_id": "system", 00:07:35.523 "dma_device_type": 1 00:07:35.523 }, 00:07:35.523 { 00:07:35.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.523 "dma_device_type": 2 00:07:35.523 } 00:07:35.523 ], 00:07:35.523 "driver_specific": {} 00:07:35.523 } 00:07:35.523 ]' 00:07:35.523 13:18:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:35.523 13:18:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:35.523 13:18:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:35.523 13:18:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.523 13:18:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:35.523 13:18:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.523 13:18:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:35.523 13:18:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.523 13:18:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:35.523 13:18:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.523 13:18:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:35.523 13:18:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:35.523 13:18:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:35.523 00:07:35.523 real 0m0.104s 00:07:35.523 user 0m0.067s 00:07:35.523 sys 0m0.009s 00:07:35.523 13:18:17 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.523 13:18:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:35.523 ************************************ 
00:07:35.523 END TEST rpc_plugins 00:07:35.523 ************************************ 00:07:35.523 13:18:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:35.523 13:18:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.523 13:18:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.523 13:18:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.523 ************************************ 00:07:35.523 START TEST rpc_trace_cmd_test 00:07:35.523 ************************************ 00:07:35.523 13:18:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:07:35.523 13:18:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:35.523 13:18:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:35.523 13:18:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.523 13:18:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.523 13:18:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.523 13:18:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:35.523 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1683418", 00:07:35.523 "tpoint_group_mask": "0x8", 00:07:35.523 "iscsi_conn": { 00:07:35.523 "mask": "0x2", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "scsi": { 00:07:35.523 "mask": "0x4", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "bdev": { 00:07:35.523 "mask": "0x8", 00:07:35.523 "tpoint_mask": "0xffffffffffffffff" 00:07:35.523 }, 00:07:35.523 "nvmf_rdma": { 00:07:35.523 "mask": "0x10", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "nvmf_tcp": { 00:07:35.523 "mask": "0x20", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "ftl": { 00:07:35.523 "mask": "0x40", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "blobfs": { 00:07:35.523 "mask": "0x80", 00:07:35.523 
"tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "dsa": { 00:07:35.523 "mask": "0x200", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "thread": { 00:07:35.523 "mask": "0x400", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "nvme_pcie": { 00:07:35.523 "mask": "0x800", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "iaa": { 00:07:35.523 "mask": "0x1000", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "nvme_tcp": { 00:07:35.523 "mask": "0x2000", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "bdev_nvme": { 00:07:35.523 "mask": "0x4000", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "sock": { 00:07:35.523 "mask": "0x8000", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "blob": { 00:07:35.523 "mask": "0x10000", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "bdev_raid": { 00:07:35.523 "mask": "0x20000", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 }, 00:07:35.523 "scheduler": { 00:07:35.523 "mask": "0x40000", 00:07:35.523 "tpoint_mask": "0x0" 00:07:35.523 } 00:07:35.523 }' 00:07:35.523 13:18:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:35.523 13:18:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:35.523 13:18:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:35.523 13:18:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:35.523 13:18:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:35.782 13:18:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:35.782 13:18:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:35.782 13:18:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:35.782 13:18:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:35.782 13:18:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:07:35.782 00:07:35.782 real 0m0.190s 00:07:35.782 user 0m0.162s 00:07:35.782 sys 0m0.018s 00:07:35.782 13:18:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.782 13:18:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.782 ************************************ 00:07:35.782 END TEST rpc_trace_cmd_test 00:07:35.782 ************************************ 00:07:35.782 13:18:17 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:35.782 13:18:17 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:35.782 13:18:17 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:35.782 13:18:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:35.782 13:18:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.782 13:18:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:35.782 ************************************ 00:07:35.782 START TEST rpc_daemon_integrity 00:07:35.782 ************************************ 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:35.782 { 00:07:35.782 "name": "Malloc2", 00:07:35.782 "aliases": [ 00:07:35.782 "e572ac4f-690d-4a98-8d6a-8d9dba2c49fb" 00:07:35.782 ], 00:07:35.782 "product_name": "Malloc disk", 00:07:35.782 "block_size": 512, 00:07:35.782 "num_blocks": 16384, 00:07:35.782 "uuid": "e572ac4f-690d-4a98-8d6a-8d9dba2c49fb", 00:07:35.782 "assigned_rate_limits": { 00:07:35.782 "rw_ios_per_sec": 0, 00:07:35.782 "rw_mbytes_per_sec": 0, 00:07:35.782 "r_mbytes_per_sec": 0, 00:07:35.782 "w_mbytes_per_sec": 0 00:07:35.782 }, 00:07:35.782 "claimed": false, 00:07:35.782 "zoned": false, 00:07:35.782 "supported_io_types": { 00:07:35.782 "read": true, 00:07:35.782 "write": true, 00:07:35.782 "unmap": true, 00:07:35.782 "flush": true, 00:07:35.782 "reset": true, 00:07:35.782 "nvme_admin": false, 00:07:35.782 "nvme_io": false, 00:07:35.782 "nvme_io_md": false, 00:07:35.782 "write_zeroes": true, 00:07:35.782 "zcopy": true, 00:07:35.782 "get_zone_info": false, 00:07:35.782 "zone_management": false, 00:07:35.782 "zone_append": false, 00:07:35.782 "compare": false, 00:07:35.782 "compare_and_write": false, 00:07:35.782 "abort": true, 00:07:35.782 "seek_hole": false, 00:07:35.782 "seek_data": false, 00:07:35.782 "copy": true, 00:07:35.782 "nvme_iov_md": false 00:07:35.782 }, 00:07:35.782 "memory_domains": [ 00:07:35.782 { 
00:07:35.782 "dma_device_id": "system", 00:07:35.782 "dma_device_type": 1 00:07:35.782 }, 00:07:35.782 { 00:07:35.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.782 "dma_device_type": 2 00:07:35.782 } 00:07:35.782 ], 00:07:35.782 "driver_specific": {} 00:07:35.782 } 00:07:35.782 ]' 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.782 [2024-10-07 13:18:17.470614] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:35.782 [2024-10-07 13:18:17.470673] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.782 [2024-10-07 13:18:17.470703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x539a00 00:07:35.782 [2024-10-07 13:18:17.470718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.782 [2024-10-07 13:18:17.471919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.782 [2024-10-07 13:18:17.471946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:35.782 Passthru0 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:35.782 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:35.782 { 00:07:35.782 "name": "Malloc2", 00:07:35.782 "aliases": [ 00:07:35.782 "e572ac4f-690d-4a98-8d6a-8d9dba2c49fb" 00:07:35.782 ], 00:07:35.782 "product_name": "Malloc disk", 00:07:35.782 "block_size": 512, 00:07:35.782 "num_blocks": 16384, 00:07:35.782 "uuid": "e572ac4f-690d-4a98-8d6a-8d9dba2c49fb", 00:07:35.782 "assigned_rate_limits": { 00:07:35.782 "rw_ios_per_sec": 0, 00:07:35.782 "rw_mbytes_per_sec": 0, 00:07:35.782 "r_mbytes_per_sec": 0, 00:07:35.782 "w_mbytes_per_sec": 0 00:07:35.782 }, 00:07:35.782 "claimed": true, 00:07:35.782 "claim_type": "exclusive_write", 00:07:35.782 "zoned": false, 00:07:35.782 "supported_io_types": { 00:07:35.782 "read": true, 00:07:35.782 "write": true, 00:07:35.782 "unmap": true, 00:07:35.782 "flush": true, 00:07:35.782 "reset": true, 00:07:35.782 "nvme_admin": false, 00:07:35.782 "nvme_io": false, 00:07:35.782 "nvme_io_md": false, 00:07:35.782 "write_zeroes": true, 00:07:35.782 "zcopy": true, 00:07:35.782 "get_zone_info": false, 00:07:35.782 "zone_management": false, 00:07:35.782 "zone_append": false, 00:07:35.782 "compare": false, 00:07:35.782 "compare_and_write": false, 00:07:35.782 "abort": true, 00:07:35.782 "seek_hole": false, 00:07:35.782 "seek_data": false, 00:07:35.782 "copy": true, 00:07:35.782 "nvme_iov_md": false 00:07:35.782 }, 00:07:35.782 "memory_domains": [ 00:07:35.782 { 00:07:35.782 "dma_device_id": "system", 00:07:35.782 "dma_device_type": 1 00:07:35.783 }, 00:07:35.783 { 00:07:35.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.783 "dma_device_type": 2 00:07:35.783 } 00:07:35.783 ], 00:07:35.783 "driver_specific": {} 00:07:35.783 }, 00:07:35.783 { 00:07:35.783 "name": "Passthru0", 00:07:35.783 "aliases": [ 00:07:35.783 "b2b37e49-1867-5437-b124-20fa95a60a5c" 00:07:35.783 ], 00:07:35.783 "product_name": "passthru", 00:07:35.783 "block_size": 512, 00:07:35.783 "num_blocks": 16384, 00:07:35.783 "uuid": 
"b2b37e49-1867-5437-b124-20fa95a60a5c", 00:07:35.783 "assigned_rate_limits": { 00:07:35.783 "rw_ios_per_sec": 0, 00:07:35.783 "rw_mbytes_per_sec": 0, 00:07:35.783 "r_mbytes_per_sec": 0, 00:07:35.783 "w_mbytes_per_sec": 0 00:07:35.783 }, 00:07:35.783 "claimed": false, 00:07:35.783 "zoned": false, 00:07:35.783 "supported_io_types": { 00:07:35.783 "read": true, 00:07:35.783 "write": true, 00:07:35.783 "unmap": true, 00:07:35.783 "flush": true, 00:07:35.783 "reset": true, 00:07:35.783 "nvme_admin": false, 00:07:35.783 "nvme_io": false, 00:07:35.783 "nvme_io_md": false, 00:07:35.783 "write_zeroes": true, 00:07:35.783 "zcopy": true, 00:07:35.783 "get_zone_info": false, 00:07:35.783 "zone_management": false, 00:07:35.783 "zone_append": false, 00:07:35.783 "compare": false, 00:07:35.783 "compare_and_write": false, 00:07:35.783 "abort": true, 00:07:35.783 "seek_hole": false, 00:07:35.783 "seek_data": false, 00:07:35.783 "copy": true, 00:07:35.783 "nvme_iov_md": false 00:07:35.783 }, 00:07:35.783 "memory_domains": [ 00:07:35.783 { 00:07:35.783 "dma_device_id": "system", 00:07:35.783 "dma_device_type": 1 00:07:35.783 }, 00:07:35.783 { 00:07:35.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.783 "dma_device_type": 2 00:07:35.783 } 00:07:35.783 ], 00:07:35.783 "driver_specific": { 00:07:35.783 "passthru": { 00:07:35.783 "name": "Passthru0", 00:07:35.783 "base_bdev_name": "Malloc2" 00:07:35.783 } 00:07:35.783 } 00:07:35.783 } 00:07:35.783 ]' 00:07:35.783 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:36.043 00:07:36.043 real 0m0.216s 00:07:36.043 user 0m0.139s 00:07:36.043 sys 0m0.022s 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.043 13:18:17 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:36.043 ************************************ 00:07:36.043 END TEST rpc_daemon_integrity 00:07:36.043 ************************************ 00:07:36.043 13:18:17 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:36.043 13:18:17 rpc -- rpc/rpc.sh@84 -- # killprocess 1683418 00:07:36.043 13:18:17 rpc -- common/autotest_common.sh@950 -- # '[' -z 1683418 ']' 00:07:36.043 13:18:17 rpc -- common/autotest_common.sh@954 -- # kill -0 1683418 00:07:36.043 13:18:17 rpc -- common/autotest_common.sh@955 -- # uname 00:07:36.043 13:18:17 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:36.043 13:18:17 rpc -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1683418 00:07:36.043 13:18:17 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:36.043 13:18:17 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:36.043 13:18:17 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1683418' 00:07:36.043 killing process with pid 1683418 00:07:36.043 13:18:17 rpc -- common/autotest_common.sh@969 -- # kill 1683418 00:07:36.043 13:18:17 rpc -- common/autotest_common.sh@974 -- # wait 1683418 00:07:36.613 00:07:36.613 real 0m2.009s 00:07:36.613 user 0m2.473s 00:07:36.613 sys 0m0.593s 00:07:36.613 13:18:18 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.613 13:18:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.613 ************************************ 00:07:36.613 END TEST rpc 00:07:36.613 ************************************ 00:07:36.613 13:18:18 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:36.613 13:18:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.613 13:18:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.613 13:18:18 -- common/autotest_common.sh@10 -- # set +x 00:07:36.613 ************************************ 00:07:36.613 START TEST skip_rpc 00:07:36.613 ************************************ 00:07:36.613 13:18:18 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:36.613 * Looking for test storage... 
00:07:36.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:36.613 13:18:18 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:36.613 13:18:18 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:07:36.613 13:18:18 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:36.613 13:18:18 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.613 13:18:18 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:36.613 13:18:18 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.613 13:18:18 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:36.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.613 --rc genhtml_branch_coverage=1 00:07:36.613 --rc genhtml_function_coverage=1 00:07:36.613 --rc genhtml_legend=1 00:07:36.613 --rc geninfo_all_blocks=1 00:07:36.613 --rc geninfo_unexecuted_blocks=1 00:07:36.613 00:07:36.613 ' 00:07:36.613 13:18:18 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:36.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.613 --rc genhtml_branch_coverage=1 00:07:36.613 --rc genhtml_function_coverage=1 00:07:36.613 --rc genhtml_legend=1 00:07:36.613 --rc geninfo_all_blocks=1 00:07:36.613 --rc geninfo_unexecuted_blocks=1 00:07:36.613 00:07:36.613 ' 00:07:36.613 13:18:18 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:07:36.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.613 --rc genhtml_branch_coverage=1 00:07:36.613 --rc genhtml_function_coverage=1 00:07:36.613 --rc genhtml_legend=1 00:07:36.613 --rc geninfo_all_blocks=1 00:07:36.613 --rc geninfo_unexecuted_blocks=1 00:07:36.613 00:07:36.613 ' 00:07:36.613 13:18:18 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:36.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.613 --rc genhtml_branch_coverage=1 00:07:36.613 --rc genhtml_function_coverage=1 00:07:36.613 --rc genhtml_legend=1 00:07:36.613 --rc geninfo_all_blocks=1 00:07:36.613 --rc geninfo_unexecuted_blocks=1 00:07:36.613 00:07:36.613 ' 00:07:36.613 13:18:18 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:36.613 13:18:18 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:36.613 13:18:18 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:36.613 13:18:18 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:36.613 13:18:18 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.613 13:18:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:36.872 ************************************ 00:07:36.872 START TEST skip_rpc 00:07:36.872 ************************************ 00:07:36.872 13:18:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:07:36.872 13:18:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1683808 00:07:36.872 13:18:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:36.872 13:18:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:36.872 13:18:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
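The target above is launched with `--no-rpc-server`, so the `rpc_cmd spdk_get_version` that follows is expected to fail; the `NOT` wrapper from autotest_common.sh inverts the exit status so the test only passes when the RPC genuinely errors out. A simplified sketch of that inversion pattern (the real helper, visible in the trace through its `valid_exec_arg`/`es` bookkeeping, is more elaborate):

```shell
# Simplified sketch of the NOT helper: succeed only when the wrapped
# command fails, mirroring the expected-failure check in the trace.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, which is what we wanted
}

# `false` stands in for an rpc_cmd call against a target with no RPC server.
NOT false && echo "RPC failure detected as expected"
NOT true || echo "unexpected success would fail the test"
```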
00:07:36.872 [2024-10-07 13:18:18.392458] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:07:36.872 [2024-10-07 13:18:18.392532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1683808 ] 00:07:36.872 [2024-10-07 13:18:18.446483] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.872 [2024-10-07 13:18:18.550113] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:42.150 13:18:23 
skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1683808 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 1683808 ']' 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 1683808 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1683808 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1683808' 00:07:42.150 killing process with pid 1683808 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 1683808 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 1683808 00:07:42.150 00:07:42.150 real 0m5.488s 00:07:42.150 user 0m5.188s 00:07:42.150 sys 0m0.317s 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:42.150 13:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.150 ************************************ 00:07:42.150 END TEST skip_rpc 00:07:42.150 ************************************ 00:07:42.150 13:18:23 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:42.150 13:18:23 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:42.150 13:18:23 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:42.150 13:18:23 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:42.409 ************************************ 00:07:42.409 START TEST skip_rpc_with_json 00:07:42.409 ************************************ 00:07:42.409 13:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:07:42.409 13:18:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:42.409 13:18:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1684407 00:07:42.409 13:18:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:42.409 13:18:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:42.409 13:18:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1684407 00:07:42.409 13:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 1684407 ']' 00:07:42.409 13:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:42.409 13:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:42.409 13:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:42.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:42.409 13:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:42.409 13:18:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:42.409 [2024-10-07 13:18:23.929307] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:07:42.409 [2024-10-07 13:18:23.929401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1684407 ] 00:07:42.409 [2024-10-07 13:18:23.985619] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.409 [2024-10-07 13:18:24.095118] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.669 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.669 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:07:42.669 13:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:42.669 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.669 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:42.669 [2024-10-07 13:18:24.356565] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:42.669 request: 00:07:42.669 { 00:07:42.669 "trtype": "tcp", 00:07:42.669 "method": "nvmf_get_transports", 00:07:42.669 "req_id": 1 00:07:42.669 } 00:07:42.669 Got JSON-RPC error response 00:07:42.669 response: 00:07:42.669 { 00:07:42.669 "code": -19, 00:07:42.669 "message": "No such device" 00:07:42.669 } 00:07:42.669 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:42.669 13:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:42.669 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.669 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:42.669 [2024-10-07 13:18:24.364692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.669 13:18:24 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.669 13:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:42.669 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.669 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:42.929 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.929 13:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:42.929 { 00:07:42.929 "subsystems": [ 00:07:42.929 { 00:07:42.929 "subsystem": "fsdev", 00:07:42.929 "config": [ 00:07:42.929 { 00:07:42.929 "method": "fsdev_set_opts", 00:07:42.929 "params": { 00:07:42.929 "fsdev_io_pool_size": 65535, 00:07:42.929 "fsdev_io_cache_size": 256 00:07:42.929 } 00:07:42.929 } 00:07:42.929 ] 00:07:42.929 }, 00:07:42.929 { 00:07:42.929 "subsystem": "vfio_user_target", 00:07:42.929 "config": null 00:07:42.929 }, 00:07:42.929 { 00:07:42.929 "subsystem": "keyring", 00:07:42.929 "config": [] 00:07:42.929 }, 00:07:42.929 { 00:07:42.929 "subsystem": "iobuf", 00:07:42.929 "config": [ 00:07:42.929 { 00:07:42.929 "method": "iobuf_set_options", 00:07:42.929 "params": { 00:07:42.929 "small_pool_count": 8192, 00:07:42.929 "large_pool_count": 1024, 00:07:42.929 "small_bufsize": 8192, 00:07:42.929 "large_bufsize": 135168 00:07:42.929 } 00:07:42.929 } 00:07:42.929 ] 00:07:42.929 }, 00:07:42.929 { 00:07:42.929 "subsystem": "sock", 00:07:42.929 "config": [ 00:07:42.929 { 00:07:42.929 "method": "sock_set_default_impl", 00:07:42.929 "params": { 00:07:42.929 "impl_name": "posix" 00:07:42.929 } 00:07:42.929 }, 00:07:42.929 { 00:07:42.929 "method": "sock_impl_set_options", 00:07:42.929 "params": { 00:07:42.929 "impl_name": "ssl", 00:07:42.929 "recv_buf_size": 4096, 00:07:42.929 "send_buf_size": 4096, 00:07:42.929 "enable_recv_pipe": true, 
00:07:42.929 "enable_quickack": false, 00:07:42.929 "enable_placement_id": 0, 00:07:42.929 "enable_zerocopy_send_server": true, 00:07:42.929 "enable_zerocopy_send_client": false, 00:07:42.929 "zerocopy_threshold": 0, 00:07:42.929 "tls_version": 0, 00:07:42.929 "enable_ktls": false 00:07:42.929 } 00:07:42.929 }, 00:07:42.929 { 00:07:42.929 "method": "sock_impl_set_options", 00:07:42.929 "params": { 00:07:42.929 "impl_name": "posix", 00:07:42.929 "recv_buf_size": 2097152, 00:07:42.929 "send_buf_size": 2097152, 00:07:42.929 "enable_recv_pipe": true, 00:07:42.930 "enable_quickack": false, 00:07:42.930 "enable_placement_id": 0, 00:07:42.930 "enable_zerocopy_send_server": true, 00:07:42.930 "enable_zerocopy_send_client": false, 00:07:42.930 "zerocopy_threshold": 0, 00:07:42.930 "tls_version": 0, 00:07:42.930 "enable_ktls": false 00:07:42.930 } 00:07:42.930 } 00:07:42.930 ] 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "subsystem": "vmd", 00:07:42.930 "config": [] 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "subsystem": "accel", 00:07:42.930 "config": [ 00:07:42.930 { 00:07:42.930 "method": "accel_set_options", 00:07:42.930 "params": { 00:07:42.930 "small_cache_size": 128, 00:07:42.930 "large_cache_size": 16, 00:07:42.930 "task_count": 2048, 00:07:42.930 "sequence_count": 2048, 00:07:42.930 "buf_count": 2048 00:07:42.930 } 00:07:42.930 } 00:07:42.930 ] 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "subsystem": "bdev", 00:07:42.930 "config": [ 00:07:42.930 { 00:07:42.930 "method": "bdev_set_options", 00:07:42.930 "params": { 00:07:42.930 "bdev_io_pool_size": 65535, 00:07:42.930 "bdev_io_cache_size": 256, 00:07:42.930 "bdev_auto_examine": true, 00:07:42.930 "iobuf_small_cache_size": 128, 00:07:42.930 "iobuf_large_cache_size": 16 00:07:42.930 } 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "method": "bdev_raid_set_options", 00:07:42.930 "params": { 00:07:42.930 "process_window_size_kb": 1024, 00:07:42.930 "process_max_bandwidth_mb_sec": 0 00:07:42.930 } 00:07:42.930 }, 
00:07:42.930 { 00:07:42.930 "method": "bdev_iscsi_set_options", 00:07:42.930 "params": { 00:07:42.930 "timeout_sec": 30 00:07:42.930 } 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "method": "bdev_nvme_set_options", 00:07:42.930 "params": { 00:07:42.930 "action_on_timeout": "none", 00:07:42.930 "timeout_us": 0, 00:07:42.930 "timeout_admin_us": 0, 00:07:42.930 "keep_alive_timeout_ms": 10000, 00:07:42.930 "arbitration_burst": 0, 00:07:42.930 "low_priority_weight": 0, 00:07:42.930 "medium_priority_weight": 0, 00:07:42.930 "high_priority_weight": 0, 00:07:42.930 "nvme_adminq_poll_period_us": 10000, 00:07:42.930 "nvme_ioq_poll_period_us": 0, 00:07:42.930 "io_queue_requests": 0, 00:07:42.930 "delay_cmd_submit": true, 00:07:42.930 "transport_retry_count": 4, 00:07:42.930 "bdev_retry_count": 3, 00:07:42.930 "transport_ack_timeout": 0, 00:07:42.930 "ctrlr_loss_timeout_sec": 0, 00:07:42.930 "reconnect_delay_sec": 0, 00:07:42.930 "fast_io_fail_timeout_sec": 0, 00:07:42.930 "disable_auto_failback": false, 00:07:42.930 "generate_uuids": false, 00:07:42.930 "transport_tos": 0, 00:07:42.930 "nvme_error_stat": false, 00:07:42.930 "rdma_srq_size": 0, 00:07:42.930 "io_path_stat": false, 00:07:42.930 "allow_accel_sequence": false, 00:07:42.930 "rdma_max_cq_size": 0, 00:07:42.930 "rdma_cm_event_timeout_ms": 0, 00:07:42.930 "dhchap_digests": [ 00:07:42.930 "sha256", 00:07:42.930 "sha384", 00:07:42.930 "sha512" 00:07:42.930 ], 00:07:42.930 "dhchap_dhgroups": [ 00:07:42.930 "null", 00:07:42.930 "ffdhe2048", 00:07:42.930 "ffdhe3072", 00:07:42.930 "ffdhe4096", 00:07:42.930 "ffdhe6144", 00:07:42.930 "ffdhe8192" 00:07:42.930 ] 00:07:42.930 } 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "method": "bdev_nvme_set_hotplug", 00:07:42.930 "params": { 00:07:42.930 "period_us": 100000, 00:07:42.930 "enable": false 00:07:42.930 } 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "method": "bdev_wait_for_examine" 00:07:42.930 } 00:07:42.930 ] 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "subsystem": "scsi", 
00:07:42.930 "config": null 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "subsystem": "scheduler", 00:07:42.930 "config": [ 00:07:42.930 { 00:07:42.930 "method": "framework_set_scheduler", 00:07:42.930 "params": { 00:07:42.930 "name": "static" 00:07:42.930 } 00:07:42.930 } 00:07:42.930 ] 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "subsystem": "vhost_scsi", 00:07:42.930 "config": [] 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "subsystem": "vhost_blk", 00:07:42.930 "config": [] 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "subsystem": "ublk", 00:07:42.930 "config": [] 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "subsystem": "nbd", 00:07:42.930 "config": [] 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "subsystem": "nvmf", 00:07:42.930 "config": [ 00:07:42.930 { 00:07:42.930 "method": "nvmf_set_config", 00:07:42.930 "params": { 00:07:42.930 "discovery_filter": "match_any", 00:07:42.930 "admin_cmd_passthru": { 00:07:42.930 "identify_ctrlr": false 00:07:42.930 }, 00:07:42.930 "dhchap_digests": [ 00:07:42.930 "sha256", 00:07:42.930 "sha384", 00:07:42.930 "sha512" 00:07:42.930 ], 00:07:42.930 "dhchap_dhgroups": [ 00:07:42.930 "null", 00:07:42.930 "ffdhe2048", 00:07:42.930 "ffdhe3072", 00:07:42.930 "ffdhe4096", 00:07:42.930 "ffdhe6144", 00:07:42.930 "ffdhe8192" 00:07:42.930 ] 00:07:42.930 } 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "method": "nvmf_set_max_subsystems", 00:07:42.930 "params": { 00:07:42.930 "max_subsystems": 1024 00:07:42.930 } 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "method": "nvmf_set_crdt", 00:07:42.930 "params": { 00:07:42.930 "crdt1": 0, 00:07:42.930 "crdt2": 0, 00:07:42.930 "crdt3": 0 00:07:42.930 } 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "method": "nvmf_create_transport", 00:07:42.930 "params": { 00:07:42.930 "trtype": "TCP", 00:07:42.930 "max_queue_depth": 128, 00:07:42.930 "max_io_qpairs_per_ctrlr": 127, 00:07:42.930 "in_capsule_data_size": 4096, 00:07:42.930 "max_io_size": 131072, 00:07:42.930 "io_unit_size": 131072, 00:07:42.930 
"max_aq_depth": 128, 00:07:42.930 "num_shared_buffers": 511, 00:07:42.930 "buf_cache_size": 4294967295, 00:07:42.930 "dif_insert_or_strip": false, 00:07:42.930 "zcopy": false, 00:07:42.930 "c2h_success": true, 00:07:42.930 "sock_priority": 0, 00:07:42.930 "abort_timeout_sec": 1, 00:07:42.930 "ack_timeout": 0, 00:07:42.930 "data_wr_pool_size": 0 00:07:42.930 } 00:07:42.930 } 00:07:42.930 ] 00:07:42.930 }, 00:07:42.930 { 00:07:42.930 "subsystem": "iscsi", 00:07:42.930 "config": [ 00:07:42.930 { 00:07:42.930 "method": "iscsi_set_options", 00:07:42.930 "params": { 00:07:42.930 "node_base": "iqn.2016-06.io.spdk", 00:07:42.930 "max_sessions": 128, 00:07:42.930 "max_connections_per_session": 2, 00:07:42.930 "max_queue_depth": 64, 00:07:42.930 "default_time2wait": 2, 00:07:42.930 "default_time2retain": 20, 00:07:42.930 "first_burst_length": 8192, 00:07:42.930 "immediate_data": true, 00:07:42.930 "allow_duplicated_isid": false, 00:07:42.930 "error_recovery_level": 0, 00:07:42.930 "nop_timeout": 60, 00:07:42.930 "nop_in_interval": 30, 00:07:42.930 "disable_chap": false, 00:07:42.930 "require_chap": false, 00:07:42.930 "mutual_chap": false, 00:07:42.930 "chap_group": 0, 00:07:42.930 "max_large_datain_per_connection": 64, 00:07:42.930 "max_r2t_per_connection": 4, 00:07:42.930 "pdu_pool_size": 36864, 00:07:42.930 "immediate_data_pool_size": 16384, 00:07:42.930 "data_out_pool_size": 2048 00:07:42.930 } 00:07:42.930 } 00:07:42.930 ] 00:07:42.930 } 00:07:42.930 ] 00:07:42.930 } 00:07:42.930 13:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:42.930 13:18:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1684407 00:07:42.930 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1684407 ']' 00:07:42.930 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1684407 00:07:42.930 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 
00:07:42.930 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.930 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1684407 00:07:42.930 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.930 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.930 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1684407' 00:07:42.930 killing process with pid 1684407 00:07:42.930 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1684407 00:07:42.930 13:18:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1684407 00:07:43.500 13:18:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1684557 00:07:43.500 13:18:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:43.500 13:18:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:07:48.775 13:18:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1684557 00:07:48.775 13:18:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 1684557 ']' 00:07:48.775 13:18:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 1684557 00:07:48.775 13:18:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:07:48.775 13:18:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.775 13:18:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1684557 00:07:48.775 13:18:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:07:48.775 13:18:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.775 13:18:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1684557' 00:07:48.775 killing process with pid 1684557 00:07:48.775 13:18:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 1684557 00:07:48.775 13:18:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 1684557 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:49.036 00:07:49.036 real 0m6.638s 00:07:49.036 user 0m6.275s 00:07:49.036 sys 0m0.687s 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:49.036 ************************************ 00:07:49.036 END TEST skip_rpc_with_json 00:07:49.036 ************************************ 00:07:49.036 13:18:30 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:07:49.036 13:18:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.036 13:18:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.036 13:18:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.036 ************************************ 00:07:49.036 START TEST skip_rpc_with_delay 00:07:49.036 ************************************ 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:07:49.036 [2024-10-07 13:18:30.621189] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:07:49.036 [2024-10-07 13:18:30.621291] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.036 00:07:49.036 real 0m0.073s 00:07:49.036 user 0m0.051s 00:07:49.036 sys 0m0.022s 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.036 13:18:30 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:07:49.036 ************************************ 00:07:49.036 END TEST skip_rpc_with_delay 00:07:49.036 ************************************ 00:07:49.036 13:18:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:07:49.036 13:18:30 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:07:49.036 13:18:30 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:07:49.036 13:18:30 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:49.036 13:18:30 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.036 13:18:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.036 ************************************ 00:07:49.036 START TEST exit_on_failed_rpc_init 00:07:49.036 ************************************ 00:07:49.036 13:18:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:07:49.036 13:18:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1685299 00:07:49.036 13:18:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:49.036 13:18:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1685299 00:07:49.036 13:18:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 1685299 ']' 00:07:49.036 13:18:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.037 13:18:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.037 13:18:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.037 13:18:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.037 13:18:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:49.037 [2024-10-07 13:18:30.745108] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:07:49.037 [2024-10-07 13:18:30.745197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1685299 ] 00:07:49.297 [2024-10-07 13:18:30.801594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.297 [2024-10-07 13:18:30.910436] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:07:49.557 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:07:49.557 [2024-10-07 13:18:31.226751] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:07:49.557 [2024-10-07 13:18:31.226839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1685350 ] 00:07:49.816 [2024-10-07 13:18:31.281683] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.816 [2024-10-07 13:18:31.391110] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.816 [2024-10-07 13:18:31.391231] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:07:49.816 [2024-10-07 13:18:31.391249] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:07:49.816 [2024-10-07 13:18:31.391260] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:49.816 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:07:49.816 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:49.816 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:07:49.816 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:07:49.816 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:07:49.816 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:49.816 13:18:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:49.816 13:18:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1685299 00:07:49.816 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 1685299 ']' 00:07:49.816 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 1685299 00:07:49.816 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:07:49.816 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:49.816 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1685299 00:07:50.074 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:50.074 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:50.074 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1685299' 
00:07:50.074 killing process with pid 1685299 00:07:50.074 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 1685299 00:07:50.074 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 1685299 00:07:50.333 00:07:50.333 real 0m1.310s 00:07:50.333 user 0m1.502s 00:07:50.333 sys 0m0.450s 00:07:50.333 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.333 13:18:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:07:50.333 ************************************ 00:07:50.333 END TEST exit_on_failed_rpc_init 00:07:50.333 ************************************ 00:07:50.333 13:18:32 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:50.333 00:07:50.333 real 0m13.864s 00:07:50.333 user 0m13.195s 00:07:50.333 sys 0m1.670s 00:07:50.333 13:18:32 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.333 13:18:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.333 ************************************ 00:07:50.333 END TEST skip_rpc 00:07:50.333 ************************************ 00:07:50.592 13:18:32 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:50.592 13:18:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.592 13:18:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.592 13:18:32 -- common/autotest_common.sh@10 -- # set +x 00:07:50.592 ************************************ 00:07:50.592 START TEST rpc_client 00:07:50.592 ************************************ 00:07:50.592 13:18:32 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:07:50.592 * Looking for test storage... 
00:07:50.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:07:50.592 13:18:32 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:50.592 13:18:32 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:07:50.592 13:18:32 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:50.592 13:18:32 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@345 -- # : 1 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@353 -- # local d=1 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@355 -- # echo 1 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@353 -- # local d=2 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@355 -- # echo 2 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.592 13:18:32 rpc_client -- scripts/common.sh@368 -- # return 0 00:07:50.592 13:18:32 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.592 13:18:32 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:50.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.592 --rc genhtml_branch_coverage=1 00:07:50.592 --rc genhtml_function_coverage=1 00:07:50.592 --rc genhtml_legend=1 00:07:50.592 --rc geninfo_all_blocks=1 00:07:50.592 --rc geninfo_unexecuted_blocks=1 00:07:50.592 00:07:50.592 ' 00:07:50.592 13:18:32 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:50.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.592 --rc genhtml_branch_coverage=1 00:07:50.592 --rc genhtml_function_coverage=1 00:07:50.592 --rc genhtml_legend=1 00:07:50.592 --rc geninfo_all_blocks=1 00:07:50.592 --rc geninfo_unexecuted_blocks=1 00:07:50.592 00:07:50.592 ' 00:07:50.592 13:18:32 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:50.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.592 --rc genhtml_branch_coverage=1 00:07:50.592 --rc genhtml_function_coverage=1 00:07:50.592 --rc genhtml_legend=1 00:07:50.592 --rc geninfo_all_blocks=1 00:07:50.592 --rc geninfo_unexecuted_blocks=1 00:07:50.592 00:07:50.592 ' 00:07:50.592 13:18:32 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:50.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.592 --rc genhtml_branch_coverage=1 00:07:50.592 --rc genhtml_function_coverage=1 00:07:50.592 --rc genhtml_legend=1 00:07:50.592 --rc geninfo_all_blocks=1 00:07:50.592 --rc geninfo_unexecuted_blocks=1 00:07:50.592 00:07:50.592 ' 00:07:50.592 13:18:32 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:07:50.592 OK 00:07:50.592 13:18:32 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:07:50.592 00:07:50.592 real 0m0.167s 00:07:50.592 user 0m0.102s 00:07:50.592 sys 0m0.074s 00:07:50.592 13:18:32 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:50.592 13:18:32 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:07:50.592 ************************************ 00:07:50.592 END TEST rpc_client 00:07:50.592 ************************************ 00:07:50.592 13:18:32 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:50.592 13:18:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:50.592 13:18:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:50.592 13:18:32 -- common/autotest_common.sh@10 -- # set +x 00:07:50.592 ************************************ 00:07:50.592 START TEST json_config 00:07:50.592 ************************************ 00:07:50.592 13:18:32 json_config -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:07:50.854 13:18:32 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:50.854 13:18:32 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:07:50.854 13:18:32 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:50.854 13:18:32 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:50.854 13:18:32 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.854 13:18:32 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.854 13:18:32 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.854 13:18:32 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.854 13:18:32 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.854 13:18:32 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.854 13:18:32 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.854 13:18:32 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.854 13:18:32 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.854 13:18:32 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.854 13:18:32 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.854 13:18:32 json_config -- scripts/common.sh@344 -- # case "$op" in 00:07:50.854 13:18:32 json_config -- scripts/common.sh@345 -- # : 1 00:07:50.854 13:18:32 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.854 13:18:32 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.854 13:18:32 json_config -- scripts/common.sh@365 -- # decimal 1 00:07:50.854 13:18:32 json_config -- scripts/common.sh@353 -- # local d=1 00:07:50.854 13:18:32 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.854 13:18:32 json_config -- scripts/common.sh@355 -- # echo 1 00:07:50.854 13:18:32 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.854 13:18:32 json_config -- scripts/common.sh@366 -- # decimal 2 00:07:50.854 13:18:32 json_config -- scripts/common.sh@353 -- # local d=2 00:07:50.854 13:18:32 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.854 13:18:32 json_config -- scripts/common.sh@355 -- # echo 2 00:07:50.854 13:18:32 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.854 13:18:32 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.854 13:18:32 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.854 13:18:32 json_config -- scripts/common.sh@368 -- # return 0 00:07:50.854 13:18:32 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.854 13:18:32 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:50.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.854 --rc genhtml_branch_coverage=1 00:07:50.854 --rc genhtml_function_coverage=1 00:07:50.854 --rc genhtml_legend=1 00:07:50.854 --rc geninfo_all_blocks=1 00:07:50.854 --rc geninfo_unexecuted_blocks=1 00:07:50.854 00:07:50.854 ' 00:07:50.854 13:18:32 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:50.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.854 --rc genhtml_branch_coverage=1 00:07:50.854 --rc genhtml_function_coverage=1 00:07:50.854 --rc genhtml_legend=1 00:07:50.854 --rc geninfo_all_blocks=1 00:07:50.854 --rc geninfo_unexecuted_blocks=1 00:07:50.854 00:07:50.854 ' 00:07:50.854 13:18:32 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:50.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.854 --rc genhtml_branch_coverage=1 00:07:50.854 --rc genhtml_function_coverage=1 00:07:50.854 --rc genhtml_legend=1 00:07:50.854 --rc geninfo_all_blocks=1 00:07:50.854 --rc geninfo_unexecuted_blocks=1 00:07:50.854 00:07:50.854 ' 00:07:50.854 13:18:32 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:50.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.854 --rc genhtml_branch_coverage=1 00:07:50.854 --rc genhtml_function_coverage=1 00:07:50.854 --rc genhtml_legend=1 00:07:50.854 --rc geninfo_all_blocks=1 00:07:50.854 --rc geninfo_unexecuted_blocks=1 00:07:50.854 00:07:50.854 ' 00:07:50.854 13:18:32 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.854 13:18:32 json_config -- nvmf/common.sh@7 -- # uname -s 00:07:50.854 13:18:32 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.854 13:18:32 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.854 13:18:32 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.854 13:18:32 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.854 13:18:32 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.855 13:18:32 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.855 13:18:32 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.855 13:18:32 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.855 13:18:32 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.855 13:18:32 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.855 13:18:32 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.855 13:18:32 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.855 13:18:32 json_config -- paths/export.sh@5 -- # export PATH 00:07:50.855 13:18:32 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@51 -- # : 0 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:50.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:50.855 13:18:32 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:07:50.855 INFO: JSON configuration test init 00:07:50.855 13:18:32 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:07:50.855 13:18:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:50.855 13:18:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:07:50.855 13:18:32 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:50.855 13:18:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:50.855 13:18:32 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:07:50.855 13:18:32 json_config -- json_config/common.sh@9 -- # local app=target 00:07:50.855 13:18:32 json_config -- json_config/common.sh@10 -- # shift 00:07:50.855 13:18:32 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:50.855 13:18:32 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:50.855 13:18:32 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:07:50.855 13:18:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:50.855 13:18:32 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:50.855 13:18:32 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1685603 00:07:50.855 13:18:32 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:07:50.855 13:18:32 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:50.855 Waiting for target to run... 
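The cmp_versions trace repeated above (scripts/common.sh@333-368) splits each version string on `.`, `-` and `:` and compares it component by component. A minimal standalone sketch of that technique, assuming purely numeric components; `ver_lt` is an illustrative name, not the actual scripts/common.sh helper:

```shell
# Component-wise version comparison, as traced by scripts/common.sh above.
# ver_lt is an illustrative name; assumes numeric version components only.
ver_lt() {
    local IFS='.-:'
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not "less than"
}

ver_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

This is why `lt 1.15 2` above succeeds and the coverage options get enabled: 1 < 2 decides the result in the first component, so the longer lhs never wins on length alone.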
00:07:50.855 13:18:32 json_config -- json_config/common.sh@25 -- # waitforlisten 1685603 /var/tmp/spdk_tgt.sock 00:07:50.855 13:18:32 json_config -- common/autotest_common.sh@831 -- # '[' -z 1685603 ']' 00:07:50.855 13:18:32 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:50.855 13:18:32 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:50.856 13:18:32 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:50.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:50.856 13:18:32 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:50.856 13:18:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:50.856 [2024-10-07 13:18:32.493468] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:07:50.856 [2024-10-07 13:18:32.493561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1685603 ] 00:07:51.422 [2024-10-07 13:18:32.993264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.422 [2024-10-07 13:18:33.087537] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.988 13:18:33 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:51.988 13:18:33 json_config -- common/autotest_common.sh@864 -- # return 0 00:07:51.988 13:18:33 json_config -- json_config/common.sh@26 -- # echo '' 00:07:51.988 00:07:51.988 13:18:33 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:07:51.988 13:18:33 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:07:51.988 13:18:33 json_config -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.988 13:18:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:51.988 13:18:33 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:07:51.988 13:18:33 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:07:51.988 13:18:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:51.988 13:18:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:51.988 13:18:33 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:07:51.988 13:18:33 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:07:51.989 13:18:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:07:55.279 13:18:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:55.279 13:18:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:07:55.279 13:18:36 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@51 -- # local get_types 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@54 -- # sort 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:07:55.279 13:18:36 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:55.279 13:18:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@62 -- # return 0 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:07:55.279 13:18:36 json_config -- 
json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:07:55.279 13:18:36 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:55.279 13:18:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:07:55.279 13:18:36 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:55.279 13:18:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:07:55.538 MallocForNvmf0 00:07:55.538 13:18:37 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:55.538 13:18:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:07:55.797 MallocForNvmf1 00:07:55.797 13:18:37 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:07:55.797 13:18:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:07:56.116 [2024-10-07 13:18:37.748820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.116 13:18:37 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:56.116 13:18:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:56.400 13:18:38 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:56.400 13:18:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:07:56.657 13:18:38 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:56.657 13:18:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:07:56.915 13:18:38 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:56.915 13:18:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:07:57.172 [2024-10-07 13:18:38.824265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:07:57.172 13:18:38 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:07:57.172 13:18:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:57.172 13:18:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:57.172 13:18:38 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:07:57.172 13:18:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:57.172 13:18:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:57.172 13:18:38 json_config -- json_config/json_config.sh@302 -- # 
[[ 0 -eq 1 ]] 00:07:57.172 13:18:38 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:57.172 13:18:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:07:57.431 MallocBdevForConfigChangeCheck 00:07:57.431 13:18:39 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:07:57.431 13:18:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:57.431 13:18:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:57.689 13:18:39 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:07:57.689 13:18:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:57.947 13:18:39 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:07:57.947 INFO: shutting down applications... 
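Earlier in this section, tgt_check_notification_types (json_config.sh@54) verifies that the types reported by `notify_get_types` match the expected set by concatenating both lists and keeping only entries that occur exactly once. A standalone sketch of that `sort | uniq -u` symmetric-difference idiom, using the same type lists as the trace:

```shell
# The sort | uniq -u idiom from json_config.sh@54 above: entries present in
# exactly one of the two lists survive; an empty result means the sets match.
enabled_types="bdev_register bdev_unregister fsdev_register fsdev_unregister"
get_types="fsdev_register fsdev_unregister bdev_register bdev_unregister"

type_diff=$(echo "$enabled_types" "$get_types" | tr ' ' '\n' | sort | uniq -u)
if [ -z "$type_diff" ]; then
    echo "notification types match"
fi
```

Order does not matter, which is exactly what the test needs: the RPC may report the types in any order, and only a genuinely missing or extra type leaves a residue in `type_diff`.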
00:07:57.947 13:18:39 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:07:57.947 13:18:39 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:07:57.947 13:18:39 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:07:57.947 13:18:39 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:07:59.856 Calling clear_iscsi_subsystem 00:07:59.856 Calling clear_nvmf_subsystem 00:07:59.856 Calling clear_nbd_subsystem 00:07:59.856 Calling clear_ublk_subsystem 00:07:59.856 Calling clear_vhost_blk_subsystem 00:07:59.856 Calling clear_vhost_scsi_subsystem 00:07:59.856 Calling clear_bdev_subsystem 00:07:59.856 13:18:41 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:07:59.856 13:18:41 json_config -- json_config/json_config.sh@350 -- # count=100 00:07:59.856 13:18:41 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:07:59.856 13:18:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:07:59.856 13:18:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:07:59.856 13:18:41 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:07:59.856 13:18:41 json_config -- json_config/json_config.sh@352 -- # break 00:07:59.856 13:18:41 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:07:59.856 13:18:41 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:07:59.856 13:18:41 json_config -- 
json_config/common.sh@31 -- # local app=target 00:07:59.856 13:18:41 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:59.856 13:18:41 json_config -- json_config/common.sh@35 -- # [[ -n 1685603 ]] 00:07:59.856 13:18:41 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1685603 00:07:59.856 13:18:41 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:59.856 13:18:41 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:59.856 13:18:41 json_config -- json_config/common.sh@41 -- # kill -0 1685603 00:07:59.856 13:18:41 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:00.422 13:18:42 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:00.422 13:18:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:00.422 13:18:42 json_config -- json_config/common.sh@41 -- # kill -0 1685603 00:08:00.422 13:18:42 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:00.422 13:18:42 json_config -- json_config/common.sh@43 -- # break 00:08:00.422 13:18:42 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:00.422 13:18:42 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:00.422 SPDK target shutdown done 00:08:00.422 13:18:42 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:08:00.422 INFO: relaunching applications... 
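The shutdown just traced (json_config/common.sh@38-45) sends SIGINT and then polls up to 30 times with `kill -0`, which delivers no signal and only probes whether the process still exists. A sketch of the pattern; `shutdown_app` is an illustrative stand-in for the real helper:

```shell
# SIGINT-then-poll shutdown, as traced in json_config/common.sh above.
# shutdown_app is an illustrative name, not the actual helper.
shutdown_app() {
    local pid=$1 i
    kill -SIGINT "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        # kill -0 sends no signal; it only tests that the process still exists
        kill -0 "$pid" 2>/dev/null || return 0
        sleep 0.5
    done
    return 1   # still alive after ~15s; the caller must escalate
}

sleep 60 &
shutdown_app $! && echo 'SPDK-style target shutdown done'
```

The bounded loop is what makes the harness robust: a wedged target cannot hang the pipeline, since after 30 probes the function gives up and the caller can fall back to a harder kill.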
00:08:00.422 13:18:42 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:00.422 13:18:42 json_config -- json_config/common.sh@9 -- # local app=target 00:08:00.422 13:18:42 json_config -- json_config/common.sh@10 -- # shift 00:08:00.422 13:18:42 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:00.422 13:18:42 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:00.422 13:18:42 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:00.422 13:18:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:00.422 13:18:42 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:00.422 13:18:42 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1686863 00:08:00.422 13:18:42 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:00.422 13:18:42 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:00.422 Waiting for target to run... 00:08:00.422 13:18:42 json_config -- json_config/common.sh@25 -- # waitforlisten 1686863 /var/tmp/spdk_tgt.sock 00:08:00.422 13:18:42 json_config -- common/autotest_common.sh@831 -- # '[' -z 1686863 ']' 00:08:00.422 13:18:42 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:00.422 13:18:42 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.422 13:18:42 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:00.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
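`waitforlisten` above retries up to `max_retries=100` until the relaunched target answers on /var/tmp/spdk_tgt.sock. The general shape is a bounded poll around a readiness probe; `wait_for` below is an illustrative reduction of that idea, not the actual autotest_common.sh implementation, and the marker file stands in for the RPC socket:

```shell
# Bounded readiness poll, the shape of waitforlisten traced above.
# wait_for is an illustrative helper, not the real autotest_common.sh one.
wait_for() {
    local probe=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        $probe && return 0
        sleep 0.1
    done
    return 1
}

# Example probe: a marker file standing in for the target's UNIX domain socket.
sock=$(mktemp -u /tmp/spdk_tgt.sock.XXX)
( sleep 0.3; touch "$sock" ) &
wait_for "test -e $sock" && echo "process is listening on $sock"
rm -f "$sock"
```

In the real harness the probe is an RPC over the socket rather than an existence check, but the retry budget serves the same purpose: a target that never comes up fails the test in bounded time instead of blocking forever.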
00:08:00.422 13:18:42 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.422 13:18:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:00.422 [2024-10-07 13:18:42.117537] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:00.422 [2024-10-07 13:18:42.117622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1686863 ] 00:08:00.989 [2024-10-07 13:18:42.622880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.248 [2024-10-07 13:18:42.716158] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.540 [2024-10-07 13:18:45.770611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.540 [2024-10-07 13:18:45.803042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:04.540 13:18:45 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:04.540 13:18:45 json_config -- common/autotest_common.sh@864 -- # return 0 00:08:04.540 13:18:45 json_config -- json_config/common.sh@26 -- # echo '' 00:08:04.540 00:08:04.540 13:18:45 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:08:04.540 13:18:45 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:04.540 INFO: Checking if target configuration is the same... 
00:08:04.540 13:18:45 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:04.540 13:18:45 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:08:04.540 13:18:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:04.540 + '[' 2 -ne 2 ']' 00:08:04.540 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:04.540 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:04.540 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:04.540 +++ basename /dev/fd/62 00:08:04.540 ++ mktemp /tmp/62.XXX 00:08:04.540 + tmp_file_1=/tmp/62.FF9 00:08:04.540 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:04.540 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:04.540 + tmp_file_2=/tmp/spdk_tgt_config.json.ziE 00:08:04.540 + ret=0 00:08:04.540 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:04.540 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:04.798 + diff -u /tmp/62.FF9 /tmp/spdk_tgt_config.json.ziE 00:08:04.798 + echo 'INFO: JSON config files are the same' 00:08:04.798 INFO: JSON config files are the same 00:08:04.798 + rm /tmp/62.FF9 /tmp/spdk_tgt_config.json.ziE 00:08:04.798 + exit 0 00:08:04.798 13:18:46 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:08:04.798 13:18:46 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:04.798 INFO: changing configuration and checking if this can be detected... 
00:08:04.798 13:18:46 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:04.798 13:18:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:05.056 13:18:46 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:05.056 13:18:46 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:08:05.056 13:18:46 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:05.056 + '[' 2 -ne 2 ']' 00:08:05.056 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:05.056 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:08:05.056 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:05.056 +++ basename /dev/fd/62 00:08:05.056 ++ mktemp /tmp/62.XXX 00:08:05.056 + tmp_file_1=/tmp/62.YBN 00:08:05.056 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:05.056 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:05.056 + tmp_file_2=/tmp/spdk_tgt_config.json.RhR 00:08:05.056 + ret=0 00:08:05.056 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:05.314 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:05.314 + diff -u /tmp/62.YBN /tmp/spdk_tgt_config.json.RhR 00:08:05.314 + ret=1 00:08:05.314 + echo '=== Start of file: /tmp/62.YBN ===' 00:08:05.314 + cat /tmp/62.YBN 00:08:05.314 + echo '=== End of file: /tmp/62.YBN ===' 00:08:05.314 + echo '' 00:08:05.314 + echo '=== Start of file: /tmp/spdk_tgt_config.json.RhR ===' 00:08:05.314 + cat /tmp/spdk_tgt_config.json.RhR 00:08:05.314 + echo '=== End of file: /tmp/spdk_tgt_config.json.RhR ===' 00:08:05.314 + echo '' 00:08:05.314 + rm /tmp/62.YBN /tmp/spdk_tgt_config.json.RhR 00:08:05.574 + exit 1 00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:08:05.574 INFO: configuration change detected. 
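The trace above detects configuration drift by dumping the target config twice over the RPC socket (`save_config`), canonicalizing both JSON files with `config_filter.py -method sort`, and diffing the results. Below is a standalone sketch of the same idea; note that substituting python3's `json` module for SPDK's `config_filter.py` is an assumption for illustration — the real filter script is more involved than plain key sorting.

```shell
# Canonicalize a JSON config dump so key order cannot cause a false
# "configuration change detected" result. (python3 json stands in for
# config_filter.py -method sort here — an assumption, not the real filter.)
sort_json() {
  python3 -c 'import json,sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'
}

tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)

# Two dumps of the "same" config, differing only in key order.
echo '{"subsystems": [], "method": "save_config"}' | sort_json > "$tmp_file_1"
echo '{"method": "save_config", "subsystems": []}' | sort_json > "$tmp_file_2"

if diff -u "$tmp_file_1" "$tmp_file_2"; then
  echo 'INFO: JSON config files are the same'
  ret=0
else
  echo 'INFO: configuration change detected.'
  ret=1
fi
rm -f "$tmp_file_1" "$tmp_file_2"
```

In the second run of the trace, a `bdev_malloc_delete` RPC is issued between the two dumps, so the diff is non-empty and the test exits 1 on purpose — that exit code is what proves the change was detected.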
00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@324 -- # [[ -n 1686863 ]] 00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@200 -- # uname -s 00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:05.574 13:18:47 json_config -- json_config/json_config.sh@330 -- # killprocess 1686863 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@950 -- # '[' -z 1686863 ']' 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@954 -- # kill -0 
1686863 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@955 -- # uname 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1686863 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1686863' 00:08:05.574 killing process with pid 1686863 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@969 -- # kill 1686863 00:08:05.574 13:18:47 json_config -- common/autotest_common.sh@974 -- # wait 1686863 00:08:07.481 13:18:48 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:07.481 13:18:48 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:07.481 13:18:48 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:07.481 13:18:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:07.481 13:18:48 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:07.481 13:18:48 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:07.481 INFO: Success 00:08:07.481 00:08:07.481 real 0m16.401s 00:08:07.481 user 0m17.862s 00:08:07.481 sys 0m2.755s 00:08:07.481 13:18:48 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:07.481 13:18:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:07.481 ************************************ 00:08:07.481 END TEST json_config 00:08:07.481 ************************************ 00:08:07.481 13:18:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:07.481 13:18:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:07.481 13:18:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:07.481 13:18:48 -- common/autotest_common.sh@10 -- # set +x 00:08:07.481 ************************************ 00:08:07.481 START TEST json_config_extra_key 00:08:07.481 ************************************ 00:08:07.481 13:18:48 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:07.481 13:18:48 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:07.481 13:18:48 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:08:07.481 13:18:48 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:07.481 13:18:48 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.481 13:18:48 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:07.481 13:18:48 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.481 13:18:48 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:07.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.481 --rc genhtml_branch_coverage=1 00:08:07.481 --rc genhtml_function_coverage=1 00:08:07.481 --rc genhtml_legend=1 00:08:07.481 --rc geninfo_all_blocks=1 
00:08:07.481 --rc geninfo_unexecuted_blocks=1 00:08:07.481 00:08:07.481 ' 00:08:07.481 13:18:48 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:07.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.481 --rc genhtml_branch_coverage=1 00:08:07.481 --rc genhtml_function_coverage=1 00:08:07.481 --rc genhtml_legend=1 00:08:07.481 --rc geninfo_all_blocks=1 00:08:07.481 --rc geninfo_unexecuted_blocks=1 00:08:07.481 00:08:07.481 ' 00:08:07.481 13:18:48 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:07.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.481 --rc genhtml_branch_coverage=1 00:08:07.481 --rc genhtml_function_coverage=1 00:08:07.481 --rc genhtml_legend=1 00:08:07.481 --rc geninfo_all_blocks=1 00:08:07.481 --rc geninfo_unexecuted_blocks=1 00:08:07.481 00:08:07.481 ' 00:08:07.481 13:18:48 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:07.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.481 --rc genhtml_branch_coverage=1 00:08:07.481 --rc genhtml_function_coverage=1 00:08:07.481 --rc genhtml_legend=1 00:08:07.481 --rc geninfo_all_blocks=1 00:08:07.481 --rc geninfo_unexecuted_blocks=1 00:08:07.481 00:08:07.481 ' 00:08:07.481 13:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:07.481 13:18:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:07.481 13:18:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.481 13:18:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.481 13:18:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.481 13:18:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.481 13:18:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
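The lcov gating traced above (`lt 1.15 2` via `cmp_versions` in `scripts/common.sh`) splits each version string on `.`, `-`, and `:` into an array and compares component by component, so `1.9` sorts before `1.15` even though lexicographic comparison would say otherwise. A condensed sketch of that comparison logic (a simplification — the real `cmp_versions` also handles `>`, `>=`, and `<=` operators):

```shell
# Component-wise dotted-version "less than", in the spirit of
# scripts/common.sh cmp_versions. Missing components default to 0.
version_lt() {
  local -a v1 v2
  IFS=.-: read -ra v1 <<< "$1"
  IFS=.-: read -ra v2 <<< "$2"
  local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} )) i
  for (( i = 0; i < n; i++ )); do
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # first differing part wins
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
  done
  return 1   # equal versions are not less-than
}
```

With this, `version_lt 1.15 2` succeeds (so the scripts enable the newer `--rc lcov_branch_coverage=1` option set seen in the trace), while `version_lt 2 1.15` fails.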
00:08:07.481 13:18:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.481 13:18:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.481 13:18:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:07.482 13:18:48 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.482 13:18:48 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.482 13:18:48 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.482 13:18:48 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.482 13:18:48 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.482 13:18:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.482 13:18:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.482 13:18:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:07.482 13:18:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:07.482 13:18:48 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.482 13:18:48 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.482 13:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:07.482 13:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:07.482 13:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:07.482 13:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:07.482 13:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:07.482 13:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:07.482 13:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:07.482 13:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:07.482 13:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:07.482 13:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:07.482 13:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:07.482 INFO: launching applications... 00:08:07.482 13:18:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:07.482 13:18:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:07.482 13:18:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:07.482 13:18:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:07.482 13:18:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:07.482 13:18:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:07.482 13:18:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:07.482 13:18:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:07.482 13:18:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1687749 00:08:07.482 13:18:48 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:07.482 13:18:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:07.482 Waiting for target to run... 
00:08:07.482 13:18:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1687749 /var/tmp/spdk_tgt.sock 00:08:07.482 13:18:48 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 1687749 ']' 00:08:07.482 13:18:48 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:07.482 13:18:48 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.482 13:18:48 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:07.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:07.482 13:18:48 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.482 13:18:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:07.482 [2024-10-07 13:18:48.940322] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:07.482 [2024-10-07 13:18:48.940402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1687749 ] 00:08:07.742 [2024-10-07 13:18:49.262659] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.742 [2024-10-07 13:18:49.338893] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.310 13:18:49 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:08.310 13:18:49 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:08:08.310 13:18:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:08.310 00:08:08.310 13:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:08:08.310 INFO: shutting down applications... 00:08:08.310 13:18:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:08.310 13:18:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:08.310 13:18:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:08.310 13:18:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1687749 ]] 00:08:08.310 13:18:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1687749 00:08:08.310 13:18:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:08.310 13:18:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:08.310 13:18:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1687749 00:08:08.310 13:18:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:08.879 13:18:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:08.879 13:18:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:08.879 13:18:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1687749 00:08:08.879 13:18:50 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:09.445 13:18:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:09.445 13:18:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:09.446 13:18:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1687749 00:08:09.446 13:18:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:09.446 13:18:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:09.446 13:18:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:09.446 13:18:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:09.446 SPDK target shutdown done 00:08:09.446 13:18:50 json_config_extra_key 
-- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:09.446 Success 00:08:09.446 00:08:09.446 real 0m2.180s 00:08:09.446 user 0m1.730s 00:08:09.446 sys 0m0.458s 00:08:09.446 13:18:50 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.446 13:18:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:09.446 ************************************ 00:08:09.446 END TEST json_config_extra_key 00:08:09.446 ************************************ 00:08:09.446 13:18:50 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:09.446 13:18:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:09.446 13:18:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.446 13:18:50 -- common/autotest_common.sh@10 -- # set +x 00:08:09.446 ************************************ 00:08:09.446 START TEST alias_rpc 00:08:09.446 ************************************ 00:08:09.446 13:18:50 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:09.446 * Looking for test storage... 
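The shutdown sequence just above (`json_config_test_shutdown_app`) sends SIGINT to the target, then polls it with `kill -0` every 0.5 s for up to 30 attempts (~15 s) before giving up — the trace shows two poll iterations before "SPDK target shutdown done". A minimal version of that loop; the signal parameter is an addition for illustration (the harness hard-codes SIGINT):

```shell
# Ask a process to shut down, then poll for its exit. Returns 0 once the
# PID is gone, 1 if it is still alive after ~15 seconds.
shutdown_app() {
  local pid=$1 sig=${2:-SIGINT} i
  kill -s "$sig" "$pid" 2>/dev/null
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 0   # process has exited
    sleep 0.5
  done
  return 1                                   # gave up; caller may SIGKILL
}
```

`kill -0` sends no signal at all — it only checks that the PID still exists and is signalable, which is what makes it a cheap liveness probe for this kind of loop.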
00:08:09.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.446 13:18:51 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:09.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.446 --rc genhtml_branch_coverage=1 00:08:09.446 --rc genhtml_function_coverage=1 00:08:09.446 --rc genhtml_legend=1 00:08:09.446 --rc geninfo_all_blocks=1 00:08:09.446 --rc geninfo_unexecuted_blocks=1 00:08:09.446 00:08:09.446 ' 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:09.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.446 --rc genhtml_branch_coverage=1 00:08:09.446 --rc genhtml_function_coverage=1 00:08:09.446 --rc genhtml_legend=1 00:08:09.446 --rc geninfo_all_blocks=1 00:08:09.446 --rc geninfo_unexecuted_blocks=1 00:08:09.446 00:08:09.446 ' 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@1695 -- 
# export 'LCOV=lcov 00:08:09.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.446 --rc genhtml_branch_coverage=1 00:08:09.446 --rc genhtml_function_coverage=1 00:08:09.446 --rc genhtml_legend=1 00:08:09.446 --rc geninfo_all_blocks=1 00:08:09.446 --rc geninfo_unexecuted_blocks=1 00:08:09.446 00:08:09.446 ' 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:09.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.446 --rc genhtml_branch_coverage=1 00:08:09.446 --rc genhtml_function_coverage=1 00:08:09.446 --rc genhtml_legend=1 00:08:09.446 --rc geninfo_all_blocks=1 00:08:09.446 --rc geninfo_unexecuted_blocks=1 00:08:09.446 00:08:09.446 ' 00:08:09.446 13:18:51 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:09.446 13:18:51 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1688061 00:08:09.446 13:18:51 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:09.446 13:18:51 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1688061 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 1688061 ']' 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.446 13:18:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:09.705 [2024-10-07 13:18:51.174887] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:08:09.705 [2024-10-07 13:18:51.174986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688061 ] 00:08:09.705 [2024-10-07 13:18:51.233107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.705 [2024-10-07 13:18:51.341788] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.964 13:18:51 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:09.964 13:18:51 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:09.964 13:18:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:10.222 13:18:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1688061 00:08:10.222 13:18:51 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 1688061 ']' 00:08:10.222 13:18:51 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 1688061 00:08:10.222 13:18:51 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:08:10.222 13:18:51 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:10.222 13:18:51 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1688061 00:08:10.222 13:18:51 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:10.222 13:18:51 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:10.222 13:18:51 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1688061' 00:08:10.222 killing process with pid 1688061 00:08:10.222 13:18:51 alias_rpc -- common/autotest_common.sh@969 -- # kill 1688061 00:08:10.222 13:18:51 alias_rpc -- common/autotest_common.sh@974 -- # wait 1688061 00:08:10.789 00:08:10.789 real 0m1.414s 00:08:10.789 user 0m1.510s 00:08:10.789 sys 0m0.460s 00:08:10.789 13:18:52 alias_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.789 13:18:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:10.789 ************************************ 00:08:10.789 END TEST alias_rpc 00:08:10.789 ************************************ 00:08:10.789 13:18:52 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:10.789 13:18:52 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:10.789 13:18:52 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.789 13:18:52 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.789 13:18:52 -- common/autotest_common.sh@10 -- # set +x 00:08:10.789 ************************************ 00:08:10.789 START TEST spdkcli_tcp 00:08:10.789 ************************************ 00:08:10.789 13:18:52 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:10.789 * Looking for test storage... 
00:08:10.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:08:10.789 13:18:52 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:10.789 13:18:52 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:10.789 13:18:52 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:11.047 13:18:52 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.047 13:18:52 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.048 13:18:52 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:11.048 13:18:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:11.048 13:18:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.048 13:18:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:11.048 13:18:52 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.048 13:18:52 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:11.048 13:18:52 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:11.048 13:18:52 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.048 13:18:52 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:11.048 13:18:52 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.048 13:18:52 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.048 13:18:52 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.048 13:18:52 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:11.048 13:18:52 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.048 13:18:52 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:11.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.048 --rc genhtml_branch_coverage=1 00:08:11.048 --rc genhtml_function_coverage=1 00:08:11.048 --rc genhtml_legend=1 00:08:11.048 --rc geninfo_all_blocks=1 00:08:11.048 --rc geninfo_unexecuted_blocks=1 00:08:11.048 00:08:11.048 ' 00:08:11.048 13:18:52 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:11.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.048 --rc genhtml_branch_coverage=1 00:08:11.048 --rc genhtml_function_coverage=1 00:08:11.048 --rc genhtml_legend=1 00:08:11.048 --rc geninfo_all_blocks=1 00:08:11.048 --rc geninfo_unexecuted_blocks=1 00:08:11.048 00:08:11.048 ' 00:08:11.048 13:18:52 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:11.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.048 --rc genhtml_branch_coverage=1 00:08:11.048 --rc genhtml_function_coverage=1 00:08:11.048 --rc genhtml_legend=1 00:08:11.048 --rc geninfo_all_blocks=1 00:08:11.048 --rc geninfo_unexecuted_blocks=1 00:08:11.048 00:08:11.048 ' 00:08:11.048 13:18:52 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:11.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.048 --rc genhtml_branch_coverage=1 00:08:11.048 --rc genhtml_function_coverage=1 00:08:11.048 --rc genhtml_legend=1 00:08:11.048 --rc geninfo_all_blocks=1 00:08:11.048 --rc geninfo_unexecuted_blocks=1 00:08:11.048 00:08:11.048 ' 00:08:11.048 13:18:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:08:11.048 13:18:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:11.048 13:18:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:08:11.048 13:18:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:11.048 13:18:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:11.048 13:18:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:11.048 13:18:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:11.048 13:18:52 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:11.048 13:18:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.048 13:18:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1688261 00:08:11.048 13:18:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:11.048 13:18:52 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 1688261 00:08:11.048 13:18:52 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 1688261 ']' 00:08:11.048 13:18:52 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.048 13:18:52 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:11.048 13:18:52 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.048 13:18:52 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:11.048 13:18:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.048 [2024-10-07 13:18:52.635342] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:11.048 [2024-10-07 13:18:52.635428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688261 ] 00:08:11.048 [2024-10-07 13:18:52.689202] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:11.305 [2024-10-07 13:18:52.793022] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.305 [2024-10-07 13:18:52.793027] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.562 13:18:53 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:11.562 13:18:53 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:08:11.562 13:18:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1688377 00:08:11.562 13:18:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:11.562 13:18:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:08:11.820 [ 00:08:11.820 "bdev_malloc_delete", 00:08:11.820 "bdev_malloc_create", 00:08:11.820 "bdev_null_resize", 00:08:11.820 "bdev_null_delete", 00:08:11.820 "bdev_null_create", 00:08:11.820 "bdev_nvme_cuse_unregister", 00:08:11.820 "bdev_nvme_cuse_register", 00:08:11.820 "bdev_opal_new_user", 00:08:11.820 "bdev_opal_set_lock_state", 00:08:11.820 "bdev_opal_delete", 00:08:11.820 "bdev_opal_get_info", 00:08:11.820 "bdev_opal_create", 00:08:11.820 "bdev_nvme_opal_revert", 00:08:11.820 "bdev_nvme_opal_init", 00:08:11.820 "bdev_nvme_send_cmd", 00:08:11.820 "bdev_nvme_set_keys", 00:08:11.820 "bdev_nvme_get_path_iostat", 00:08:11.820 "bdev_nvme_get_mdns_discovery_info", 00:08:11.820 "bdev_nvme_stop_mdns_discovery", 00:08:11.820 "bdev_nvme_start_mdns_discovery", 00:08:11.820 "bdev_nvme_set_multipath_policy", 00:08:11.820 "bdev_nvme_set_preferred_path", 00:08:11.820 "bdev_nvme_get_io_paths", 00:08:11.820 "bdev_nvme_remove_error_injection", 00:08:11.820 "bdev_nvme_add_error_injection", 00:08:11.820 "bdev_nvme_get_discovery_info", 00:08:11.820 "bdev_nvme_stop_discovery", 00:08:11.820 "bdev_nvme_start_discovery", 00:08:11.820 "bdev_nvme_get_controller_health_info", 00:08:11.820 "bdev_nvme_disable_controller", 00:08:11.820 "bdev_nvme_enable_controller", 00:08:11.820 "bdev_nvme_reset_controller", 00:08:11.820 "bdev_nvme_get_transport_statistics", 00:08:11.820 "bdev_nvme_apply_firmware", 00:08:11.820 "bdev_nvme_detach_controller", 00:08:11.820 "bdev_nvme_get_controllers", 00:08:11.820 "bdev_nvme_attach_controller", 00:08:11.820 "bdev_nvme_set_hotplug", 00:08:11.820 "bdev_nvme_set_options", 00:08:11.820 "bdev_passthru_delete", 00:08:11.820 "bdev_passthru_create", 00:08:11.820 "bdev_lvol_set_parent_bdev", 00:08:11.820 "bdev_lvol_set_parent", 00:08:11.820 "bdev_lvol_check_shallow_copy", 00:08:11.820 "bdev_lvol_start_shallow_copy", 00:08:11.820 "bdev_lvol_grow_lvstore", 00:08:11.820 "bdev_lvol_get_lvols", 00:08:11.820 "bdev_lvol_get_lvstores", 
00:08:11.820 "bdev_lvol_delete", 00:08:11.820 "bdev_lvol_set_read_only", 00:08:11.820 "bdev_lvol_resize", 00:08:11.820 "bdev_lvol_decouple_parent", 00:08:11.820 "bdev_lvol_inflate", 00:08:11.820 "bdev_lvol_rename", 00:08:11.820 "bdev_lvol_clone_bdev", 00:08:11.820 "bdev_lvol_clone", 00:08:11.820 "bdev_lvol_snapshot", 00:08:11.820 "bdev_lvol_create", 00:08:11.820 "bdev_lvol_delete_lvstore", 00:08:11.820 "bdev_lvol_rename_lvstore", 00:08:11.820 "bdev_lvol_create_lvstore", 00:08:11.820 "bdev_raid_set_options", 00:08:11.820 "bdev_raid_remove_base_bdev", 00:08:11.820 "bdev_raid_add_base_bdev", 00:08:11.820 "bdev_raid_delete", 00:08:11.820 "bdev_raid_create", 00:08:11.820 "bdev_raid_get_bdevs", 00:08:11.820 "bdev_error_inject_error", 00:08:11.820 "bdev_error_delete", 00:08:11.820 "bdev_error_create", 00:08:11.820 "bdev_split_delete", 00:08:11.820 "bdev_split_create", 00:08:11.820 "bdev_delay_delete", 00:08:11.820 "bdev_delay_create", 00:08:11.820 "bdev_delay_update_latency", 00:08:11.820 "bdev_zone_block_delete", 00:08:11.820 "bdev_zone_block_create", 00:08:11.820 "blobfs_create", 00:08:11.820 "blobfs_detect", 00:08:11.820 "blobfs_set_cache_size", 00:08:11.820 "bdev_aio_delete", 00:08:11.820 "bdev_aio_rescan", 00:08:11.820 "bdev_aio_create", 00:08:11.820 "bdev_ftl_set_property", 00:08:11.820 "bdev_ftl_get_properties", 00:08:11.820 "bdev_ftl_get_stats", 00:08:11.820 "bdev_ftl_unmap", 00:08:11.820 "bdev_ftl_unload", 00:08:11.820 "bdev_ftl_delete", 00:08:11.820 "bdev_ftl_load", 00:08:11.820 "bdev_ftl_create", 00:08:11.820 "bdev_virtio_attach_controller", 00:08:11.820 "bdev_virtio_scsi_get_devices", 00:08:11.820 "bdev_virtio_detach_controller", 00:08:11.820 "bdev_virtio_blk_set_hotplug", 00:08:11.820 "bdev_iscsi_delete", 00:08:11.820 "bdev_iscsi_create", 00:08:11.820 "bdev_iscsi_set_options", 00:08:11.820 "accel_error_inject_error", 00:08:11.820 "ioat_scan_accel_module", 00:08:11.820 "dsa_scan_accel_module", 00:08:11.820 "iaa_scan_accel_module", 00:08:11.820 
"vfu_virtio_create_fs_endpoint", 00:08:11.820 "vfu_virtio_create_scsi_endpoint", 00:08:11.820 "vfu_virtio_scsi_remove_target", 00:08:11.820 "vfu_virtio_scsi_add_target", 00:08:11.820 "vfu_virtio_create_blk_endpoint", 00:08:11.820 "vfu_virtio_delete_endpoint", 00:08:11.820 "keyring_file_remove_key", 00:08:11.820 "keyring_file_add_key", 00:08:11.820 "keyring_linux_set_options", 00:08:11.820 "fsdev_aio_delete", 00:08:11.820 "fsdev_aio_create", 00:08:11.820 "iscsi_get_histogram", 00:08:11.820 "iscsi_enable_histogram", 00:08:11.820 "iscsi_set_options", 00:08:11.820 "iscsi_get_auth_groups", 00:08:11.820 "iscsi_auth_group_remove_secret", 00:08:11.820 "iscsi_auth_group_add_secret", 00:08:11.820 "iscsi_delete_auth_group", 00:08:11.820 "iscsi_create_auth_group", 00:08:11.820 "iscsi_set_discovery_auth", 00:08:11.820 "iscsi_get_options", 00:08:11.820 "iscsi_target_node_request_logout", 00:08:11.820 "iscsi_target_node_set_redirect", 00:08:11.820 "iscsi_target_node_set_auth", 00:08:11.820 "iscsi_target_node_add_lun", 00:08:11.820 "iscsi_get_stats", 00:08:11.820 "iscsi_get_connections", 00:08:11.820 "iscsi_portal_group_set_auth", 00:08:11.820 "iscsi_start_portal_group", 00:08:11.820 "iscsi_delete_portal_group", 00:08:11.820 "iscsi_create_portal_group", 00:08:11.820 "iscsi_get_portal_groups", 00:08:11.821 "iscsi_delete_target_node", 00:08:11.821 "iscsi_target_node_remove_pg_ig_maps", 00:08:11.821 "iscsi_target_node_add_pg_ig_maps", 00:08:11.821 "iscsi_create_target_node", 00:08:11.821 "iscsi_get_target_nodes", 00:08:11.821 "iscsi_delete_initiator_group", 00:08:11.821 "iscsi_initiator_group_remove_initiators", 00:08:11.821 "iscsi_initiator_group_add_initiators", 00:08:11.821 "iscsi_create_initiator_group", 00:08:11.821 "iscsi_get_initiator_groups", 00:08:11.821 "nvmf_set_crdt", 00:08:11.821 "nvmf_set_config", 00:08:11.821 "nvmf_set_max_subsystems", 00:08:11.821 "nvmf_stop_mdns_prr", 00:08:11.821 "nvmf_publish_mdns_prr", 00:08:11.821 "nvmf_subsystem_get_listeners", 00:08:11.821 
"nvmf_subsystem_get_qpairs", 00:08:11.821 "nvmf_subsystem_get_controllers", 00:08:11.821 "nvmf_get_stats", 00:08:11.821 "nvmf_get_transports", 00:08:11.821 "nvmf_create_transport", 00:08:11.821 "nvmf_get_targets", 00:08:11.821 "nvmf_delete_target", 00:08:11.821 "nvmf_create_target", 00:08:11.821 "nvmf_subsystem_allow_any_host", 00:08:11.821 "nvmf_subsystem_set_keys", 00:08:11.821 "nvmf_subsystem_remove_host", 00:08:11.821 "nvmf_subsystem_add_host", 00:08:11.821 "nvmf_ns_remove_host", 00:08:11.821 "nvmf_ns_add_host", 00:08:11.821 "nvmf_subsystem_remove_ns", 00:08:11.821 "nvmf_subsystem_set_ns_ana_group", 00:08:11.821 "nvmf_subsystem_add_ns", 00:08:11.821 "nvmf_subsystem_listener_set_ana_state", 00:08:11.821 "nvmf_discovery_get_referrals", 00:08:11.821 "nvmf_discovery_remove_referral", 00:08:11.821 "nvmf_discovery_add_referral", 00:08:11.821 "nvmf_subsystem_remove_listener", 00:08:11.821 "nvmf_subsystem_add_listener", 00:08:11.821 "nvmf_delete_subsystem", 00:08:11.821 "nvmf_create_subsystem", 00:08:11.821 "nvmf_get_subsystems", 00:08:11.821 "env_dpdk_get_mem_stats", 00:08:11.821 "nbd_get_disks", 00:08:11.821 "nbd_stop_disk", 00:08:11.821 "nbd_start_disk", 00:08:11.821 "ublk_recover_disk", 00:08:11.821 "ublk_get_disks", 00:08:11.821 "ublk_stop_disk", 00:08:11.821 "ublk_start_disk", 00:08:11.821 "ublk_destroy_target", 00:08:11.821 "ublk_create_target", 00:08:11.821 "virtio_blk_create_transport", 00:08:11.821 "virtio_blk_get_transports", 00:08:11.821 "vhost_controller_set_coalescing", 00:08:11.821 "vhost_get_controllers", 00:08:11.821 "vhost_delete_controller", 00:08:11.821 "vhost_create_blk_controller", 00:08:11.821 "vhost_scsi_controller_remove_target", 00:08:11.821 "vhost_scsi_controller_add_target", 00:08:11.821 "vhost_start_scsi_controller", 00:08:11.821 "vhost_create_scsi_controller", 00:08:11.821 "thread_set_cpumask", 00:08:11.821 "scheduler_set_options", 00:08:11.821 "framework_get_governor", 00:08:11.821 "framework_get_scheduler", 00:08:11.821 
"framework_set_scheduler", 00:08:11.821 "framework_get_reactors", 00:08:11.821 "thread_get_io_channels", 00:08:11.821 "thread_get_pollers", 00:08:11.821 "thread_get_stats", 00:08:11.821 "framework_monitor_context_switch", 00:08:11.821 "spdk_kill_instance", 00:08:11.821 "log_enable_timestamps", 00:08:11.821 "log_get_flags", 00:08:11.821 "log_clear_flag", 00:08:11.821 "log_set_flag", 00:08:11.821 "log_get_level", 00:08:11.821 "log_set_level", 00:08:11.821 "log_get_print_level", 00:08:11.821 "log_set_print_level", 00:08:11.821 "framework_enable_cpumask_locks", 00:08:11.821 "framework_disable_cpumask_locks", 00:08:11.821 "framework_wait_init", 00:08:11.821 "framework_start_init", 00:08:11.821 "scsi_get_devices", 00:08:11.821 "bdev_get_histogram", 00:08:11.821 "bdev_enable_histogram", 00:08:11.821 "bdev_set_qos_limit", 00:08:11.821 "bdev_set_qd_sampling_period", 00:08:11.821 "bdev_get_bdevs", 00:08:11.821 "bdev_reset_iostat", 00:08:11.821 "bdev_get_iostat", 00:08:11.821 "bdev_examine", 00:08:11.821 "bdev_wait_for_examine", 00:08:11.821 "bdev_set_options", 00:08:11.821 "accel_get_stats", 00:08:11.821 "accel_set_options", 00:08:11.821 "accel_set_driver", 00:08:11.821 "accel_crypto_key_destroy", 00:08:11.821 "accel_crypto_keys_get", 00:08:11.821 "accel_crypto_key_create", 00:08:11.821 "accel_assign_opc", 00:08:11.821 "accel_get_module_info", 00:08:11.821 "accel_get_opc_assignments", 00:08:11.821 "vmd_rescan", 00:08:11.821 "vmd_remove_device", 00:08:11.821 "vmd_enable", 00:08:11.821 "sock_get_default_impl", 00:08:11.821 "sock_set_default_impl", 00:08:11.821 "sock_impl_set_options", 00:08:11.821 "sock_impl_get_options", 00:08:11.821 "iobuf_get_stats", 00:08:11.821 "iobuf_set_options", 00:08:11.821 "keyring_get_keys", 00:08:11.821 "vfu_tgt_set_base_path", 00:08:11.821 "framework_get_pci_devices", 00:08:11.821 "framework_get_config", 00:08:11.821 "framework_get_subsystems", 00:08:11.821 "fsdev_set_opts", 00:08:11.821 "fsdev_get_opts", 00:08:11.821 "trace_get_info", 
00:08:11.821 "trace_get_tpoint_group_mask", 00:08:11.821 "trace_disable_tpoint_group", 00:08:11.821 "trace_enable_tpoint_group", 00:08:11.821 "trace_clear_tpoint_mask", 00:08:11.821 "trace_set_tpoint_mask", 00:08:11.821 "notify_get_notifications", 00:08:11.821 "notify_get_types", 00:08:11.821 "spdk_get_version", 00:08:11.821 "rpc_get_methods" 00:08:11.821 ] 00:08:11.821 13:18:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:11.821 13:18:53 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:11.821 13:18:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:11.821 13:18:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:11.821 13:18:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1688261 00:08:11.821 13:18:53 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 1688261 ']' 00:08:11.821 13:18:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 1688261 00:08:11.821 13:18:53 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:08:11.821 13:18:53 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.821 13:18:53 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1688261 00:08:11.821 13:18:53 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.821 13:18:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.821 13:18:53 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1688261' 00:08:11.821 killing process with pid 1688261 00:08:11.821 13:18:53 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 1688261 00:08:11.821 13:18:53 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 1688261 00:08:12.389 00:08:12.389 real 0m1.402s 00:08:12.389 user 0m2.406s 00:08:12.389 sys 0m0.482s 00:08:12.389 13:18:53 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:12.389 13:18:53 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:08:12.389 ************************************ 00:08:12.389 END TEST spdkcli_tcp 00:08:12.389 ************************************ 00:08:12.389 13:18:53 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:12.389 13:18:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:12.389 13:18:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:12.389 13:18:53 -- common/autotest_common.sh@10 -- # set +x 00:08:12.389 ************************************ 00:08:12.389 START TEST dpdk_mem_utility 00:08:12.389 ************************************ 00:08:12.389 13:18:53 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:12.389 * Looking for test storage... 00:08:12.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:08:12.389 13:18:53 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:12.389 13:18:53 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:08:12.389 13:18:53 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:12.389 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:12.389 13:18:54 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:12.389 13:18:54 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:12.389 13:18:54 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:12.390 13:18:54 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:12.390 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:12.390 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 
00:08:12.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.390 --rc genhtml_branch_coverage=1 00:08:12.390 --rc genhtml_function_coverage=1 00:08:12.390 --rc genhtml_legend=1 00:08:12.390 --rc geninfo_all_blocks=1 00:08:12.390 --rc geninfo_unexecuted_blocks=1 00:08:12.390 00:08:12.390 ' 00:08:12.390 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:12.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.390 --rc genhtml_branch_coverage=1 00:08:12.390 --rc genhtml_function_coverage=1 00:08:12.390 --rc genhtml_legend=1 00:08:12.390 --rc geninfo_all_blocks=1 00:08:12.390 --rc geninfo_unexecuted_blocks=1 00:08:12.390 00:08:12.390 ' 00:08:12.390 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:12.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.390 --rc genhtml_branch_coverage=1 00:08:12.390 --rc genhtml_function_coverage=1 00:08:12.390 --rc genhtml_legend=1 00:08:12.390 --rc geninfo_all_blocks=1 00:08:12.390 --rc geninfo_unexecuted_blocks=1 00:08:12.390 00:08:12.390 ' 00:08:12.390 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:12.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:12.390 --rc genhtml_branch_coverage=1 00:08:12.390 --rc genhtml_function_coverage=1 00:08:12.390 --rc genhtml_legend=1 00:08:12.390 --rc geninfo_all_blocks=1 00:08:12.390 --rc geninfo_unexecuted_blocks=1 00:08:12.390 00:08:12.390 ' 00:08:12.390 13:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:12.390 13:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1688463 00:08:12.390 13:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:12.390 13:18:54 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1688463 00:08:12.390 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 1688463 ']' 00:08:12.390 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.390 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.390 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.390 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.390 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:12.390 [2024-10-07 13:18:54.090688] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:12.390 [2024-10-07 13:18:54.090777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688463 ] 00:08:12.649 [2024-10-07 13:18:54.150208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.649 [2024-10-07 13:18:54.261145] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.907 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.907 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:08:12.907 13:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:12.907 13:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:12.907 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.907 
13:18:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:12.907 { 00:08:12.907 "filename": "/tmp/spdk_mem_dump.txt" 00:08:12.907 } 00:08:12.907 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.907 13:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:12.907 DPDK memory size 860.000000 MiB in 1 heap(s) 00:08:12.907 1 heaps totaling size 860.000000 MiB 00:08:12.907 size: 860.000000 MiB heap id: 0 00:08:12.907 end heaps---------- 00:08:12.907 9 mempools totaling size 642.649841 MiB 00:08:12.907 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:12.907 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:12.907 size: 92.545471 MiB name: bdev_io_1688463 00:08:12.907 size: 51.011292 MiB name: evtpool_1688463 00:08:12.907 size: 50.003479 MiB name: msgpool_1688463 00:08:12.907 size: 36.509338 MiB name: fsdev_io_1688463 00:08:12.907 size: 21.763794 MiB name: PDU_Pool 00:08:12.907 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:12.907 size: 0.026123 MiB name: Session_Pool 00:08:12.907 end mempools------- 00:08:12.907 6 memzones totaling size 4.142822 MiB 00:08:12.907 size: 1.000366 MiB name: RG_ring_0_1688463 00:08:12.907 size: 1.000366 MiB name: RG_ring_1_1688463 00:08:12.907 size: 1.000366 MiB name: RG_ring_4_1688463 00:08:12.907 size: 1.000366 MiB name: RG_ring_5_1688463 00:08:12.907 size: 0.125366 MiB name: RG_ring_2_1688463 00:08:12.907 size: 0.015991 MiB name: RG_ring_3_1688463 00:08:12.907 end memzones------- 00:08:12.907 13:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:13.167 heap id: 0 total size: 860.000000 MiB number of busy elements: 44 number of free elements: 16 00:08:13.167 list of free elements. 
size: 13.984680 MiB
00:08:13.167 element at address: 0x200000400000 with size: 1.999512 MiB
00:08:13.167 element at address: 0x200000800000 with size: 1.996948 MiB
00:08:13.167 element at address: 0x20001bc00000 with size: 0.999878 MiB
00:08:13.167 element at address: 0x20001be00000 with size: 0.999878 MiB
00:08:13.167 element at address: 0x200034a00000 with size: 0.994446 MiB
00:08:13.167 element at address: 0x200009600000 with size: 0.959839 MiB
00:08:13.167 element at address: 0x200015e00000 with size: 0.954285 MiB
00:08:13.167 element at address: 0x20001c000000 with size: 0.936584 MiB
00:08:13.167 element at address: 0x200000200000 with size: 0.841614 MiB
00:08:13.167 element at address: 0x20001d800000 with size: 0.582886 MiB
00:08:13.167 element at address: 0x200003e00000 with size: 0.495422 MiB
00:08:13.167 element at address: 0x20000d800000 with size: 0.490723 MiB
00:08:13.167 element at address: 0x20001c200000 with size: 0.485657 MiB
00:08:13.167 element at address: 0x200007000000 with size: 0.481934 MiB
00:08:13.167 element at address: 0x20002ac00000 with size: 0.410034 MiB
00:08:13.167 element at address: 0x200003a00000 with size: 0.355042 MiB
00:08:13.167 list of standard malloc elements. size: 199.218628 MiB
00:08:13.167 element at address: 0x20000d9fff80 with size: 132.000122 MiB
00:08:13.167 element at address: 0x2000097fff80 with size: 64.000122 MiB
00:08:13.167 element at address: 0x20001bcfff80 with size: 1.000122 MiB
00:08:13.167 element at address: 0x20001befff80 with size: 1.000122 MiB
00:08:13.167 element at address: 0x20001c0fff80 with size: 1.000122 MiB
00:08:13.167 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:08:13.167 element at address: 0x20001c0eff00 with size: 0.062622 MiB
00:08:13.167 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:08:13.167 element at address: 0x20001c0efdc0 with size: 0.000305 MiB
00:08:13.167 element at address: 0x2000002d7740 with size: 0.000183 MiB
00:08:13.167 element at address: 0x2000002d7800 with size: 0.000183 MiB
00:08:13.167 element at address: 0x2000002d78c0 with size: 0.000183 MiB
00:08:13.167 element at address: 0x2000002d7ac0 with size: 0.000183 MiB
00:08:13.167 element at address: 0x2000002d7b80 with size: 0.000183 MiB
00:08:13.167 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:08:13.167 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:08:13.167 element at address: 0x200003a5ae40 with size: 0.000183 MiB
00:08:13.168 element at address: 0x200003a5b040 with size: 0.000183 MiB
00:08:13.168 element at address: 0x200003a5f300 with size: 0.000183 MiB
00:08:13.168 element at address: 0x200003a7f5c0 with size: 0.000183 MiB
00:08:13.168 element at address: 0x200003a7f680 with size: 0.000183 MiB
00:08:13.168 element at address: 0x200003aff940 with size: 0.000183 MiB
00:08:13.168 element at address: 0x200003affb40 with size: 0.000183 MiB
00:08:13.168 element at address: 0x200003e7ed40 with size: 0.000183 MiB
00:08:13.168 element at address: 0x200003eff000 with size: 0.000183 MiB
00:08:13.168 element at address: 0x200003eff0c0 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20000707b600 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20000707b6c0 with size: 0.000183 MiB
00:08:13.168 element at address: 0x2000070fb980 with size: 0.000183 MiB
00:08:13.168 element at address: 0x2000096fdd80 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20000d87da00 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20000d87dac0 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20000d8fdd80 with size: 0.000183 MiB
00:08:13.168 element at address: 0x200015ef44c0 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20001c0efc40 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20001c0efd00 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20001c2bc740 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20001d895380 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20001d895440 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20002ac68f80 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20002ac69040 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20002ac6fc40 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20002ac6fe40 with size: 0.000183 MiB
00:08:13.168 element at address: 0x20002ac6ff00 with size: 0.000183 MiB
00:08:13.168 list of memzone associated elements.
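The heap dump above repeats a fixed `element at address: <addr> with size: <N> MiB` pattern. As an illustration only (a hypothetical helper, not part of the SPDK scripts), the element sizes in such a dump can be totalled with grep and awk:

```shell
#!/usr/bin/env bash
# sum_element_sizes: hypothetical helper, not part of SPDK.
# Totals the "with size: N MiB" fields of a dpdk_mem_info-style dump,
# ignoring the "associated memzone info" size fields.
sum_element_sizes() {
    grep -o 'with size: [0-9.]*' "$1" |
        awk '{ total += $3 } END { printf "%.6f\n", total }'
}
```

This deliberately matches only the `with size:` form so the memzone info lines (which use a bare `size:`) are not double-counted.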
size: 646.796692 MiB
00:08:13.168 element at address: 0x20001d895500 with size: 211.416748 MiB
00:08:13.168 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:08:13.168 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB
00:08:13.168 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:08:13.168 element at address: 0x200015ff4780 with size: 92.045044 MiB
00:08:13.168 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1688463_0
00:08:13.168 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:08:13.168 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1688463_0
00:08:13.168 element at address: 0x200003fff380 with size: 48.003052 MiB
00:08:13.168 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1688463_0
00:08:13.168 element at address: 0x2000071fdb80 with size: 36.008911 MiB
00:08:13.168 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1688463_0
00:08:13.168 element at address: 0x20001c3be940 with size: 20.255554 MiB
00:08:13.168 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:08:13.168 element at address: 0x200034bfeb40 with size: 18.005066 MiB
00:08:13.168 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:08:13.168 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:08:13.168 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1688463
00:08:13.168 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:08:13.168 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1688463
00:08:13.168 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:08:13.168 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1688463
00:08:13.168 element at address: 0x20000d8fde40 with size: 1.008118 MiB
00:08:13.168 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:08:13.168 element at address: 0x20001c2bc800 with size: 1.008118 MiB
00:08:13.168 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:08:13.168 element at address: 0x2000096fde40 with size: 1.008118 MiB
00:08:13.168 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:08:13.168 element at address: 0x2000070fba40 with size: 1.008118 MiB
00:08:13.168 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:08:13.168 element at address: 0x200003eff180 with size: 1.000488 MiB
00:08:13.168 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1688463
00:08:13.168 element at address: 0x200003affc00 with size: 1.000488 MiB
00:08:13.168 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1688463
00:08:13.168 element at address: 0x200015ef4580 with size: 1.000488 MiB
00:08:13.168 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1688463
00:08:13.168 element at address: 0x200034afe940 with size: 1.000488 MiB
00:08:13.168 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1688463
00:08:13.168 element at address: 0x200003a7f740 with size: 0.500488 MiB
00:08:13.168 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1688463
00:08:13.168 element at address: 0x200003e7ee00 with size: 0.500488 MiB
00:08:13.168 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1688463
00:08:13.168 element at address: 0x20000d87db80 with size: 0.500488 MiB
00:08:13.168 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:08:13.168 element at address: 0x20000707b780 with size: 0.500488 MiB
00:08:13.168 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:08:13.168 element at address: 0x20001c27c540 with size: 0.250488 MiB
00:08:13.168 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:08:13.168 element at address: 0x200003a5f3c0 with size: 0.125488 MiB
00:08:13.168 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1688463
00:08:13.168 element at address: 0x2000096f5b80 with size: 0.031738 MiB
00:08:13.168 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:08:13.168 element at address: 0x20002ac69100 with size: 0.023743 MiB
00:08:13.168 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:08:13.168 element at address: 0x200003a5b100 with size: 0.016113 MiB
00:08:13.168 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1688463
00:08:13.168 element at address: 0x20002ac6f240 with size: 0.002441 MiB
00:08:13.168 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:08:13.168 element at address: 0x2000002d7980 with size: 0.000305 MiB
00:08:13.168 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1688463
00:08:13.168 element at address: 0x200003affa00 with size: 0.000305 MiB
00:08:13.168 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1688463
00:08:13.168 element at address: 0x200003a5af00 with size: 0.000305 MiB
00:08:13.168 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1688463
00:08:13.168 element at address: 0x20002ac6fd00 with size: 0.000305 MiB
00:08:13.168 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:08:13.168 13:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:08:13.168 13:18:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1688463
00:08:13.168 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 1688463 ']'
00:08:13.168 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 1688463
00:08:13.168 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:08:13.168 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:13.168 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1688463
00:08:13.168 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.168
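The killprocess trace above shows the shape of the cleanup: verify the pid is non-empty, probe it with `kill -0`, look up its command name, then kill and wait. A minimal standalone sketch of that flow (simplified; the real helper in SPDK's autotest_common.sh also handles sudo and signal escalation):

```shell
#!/usr/bin/env bash
# Simplified sketch of the killprocess flow seen in the trace;
# NOT the actual autotest_common.sh implementation.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0       # process already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($process_name)"
    kill "$pid"
    wait "$pid" 2>/dev/null                      # reap it if it is our child
}
```

Calling it with a pid that no longer exists is a no-op, which is why the trace's `kill -0` probe comes before any signal is sent.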
13:18:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.168 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1688463' 00:08:13.168 killing process with pid 1688463 00:08:13.168 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 1688463 00:08:13.168 13:18:54 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 1688463 00:08:13.427 00:08:13.427 real 0m1.248s 00:08:13.427 user 0m1.209s 00:08:13.427 sys 0m0.453s 00:08:13.427 13:18:55 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:13.427 13:18:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:13.427 ************************************ 00:08:13.427 END TEST dpdk_mem_utility 00:08:13.427 ************************************ 00:08:13.687 13:18:55 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:13.687 13:18:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:13.687 13:18:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.687 13:18:55 -- common/autotest_common.sh@10 -- # set +x 00:08:13.687 ************************************ 00:08:13.687 START TEST event 00:08:13.687 ************************************ 00:08:13.687 13:18:55 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:13.687 * Looking for test storage... 
00:08:13.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:13.687 13:18:55 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:13.687 13:18:55 event -- common/autotest_common.sh@1681 -- # lcov --version 00:08:13.687 13:18:55 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:13.687 13:18:55 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:13.687 13:18:55 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:13.687 13:18:55 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:13.687 13:18:55 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:13.687 13:18:55 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:13.687 13:18:55 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:13.687 13:18:55 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:13.687 13:18:55 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:13.687 13:18:55 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:13.687 13:18:55 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:13.687 13:18:55 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:13.687 13:18:55 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:13.687 13:18:55 event -- scripts/common.sh@344 -- # case "$op" in 00:08:13.687 13:18:55 event -- scripts/common.sh@345 -- # : 1 00:08:13.687 13:18:55 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:13.687 13:18:55 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:13.687 13:18:55 event -- scripts/common.sh@365 -- # decimal 1 00:08:13.687 13:18:55 event -- scripts/common.sh@353 -- # local d=1 00:08:13.687 13:18:55 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:13.687 13:18:55 event -- scripts/common.sh@355 -- # echo 1 00:08:13.687 13:18:55 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:13.687 13:18:55 event -- scripts/common.sh@366 -- # decimal 2 00:08:13.687 13:18:55 event -- scripts/common.sh@353 -- # local d=2 00:08:13.687 13:18:55 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:13.687 13:18:55 event -- scripts/common.sh@355 -- # echo 2 00:08:13.687 13:18:55 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:13.687 13:18:55 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:13.687 13:18:55 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:13.687 13:18:55 event -- scripts/common.sh@368 -- # return 0 00:08:13.687 13:18:55 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:13.687 13:18:55 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:13.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.687 --rc genhtml_branch_coverage=1 00:08:13.687 --rc genhtml_function_coverage=1 00:08:13.687 --rc genhtml_legend=1 00:08:13.687 --rc geninfo_all_blocks=1 00:08:13.687 --rc geninfo_unexecuted_blocks=1 00:08:13.687 00:08:13.687 ' 00:08:13.687 13:18:55 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:13.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.687 --rc genhtml_branch_coverage=1 00:08:13.687 --rc genhtml_function_coverage=1 00:08:13.687 --rc genhtml_legend=1 00:08:13.687 --rc geninfo_all_blocks=1 00:08:13.687 --rc geninfo_unexecuted_blocks=1 00:08:13.687 00:08:13.687 ' 00:08:13.687 13:18:55 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:13.687 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:13.687 --rc genhtml_branch_coverage=1 00:08:13.687 --rc genhtml_function_coverage=1 00:08:13.687 --rc genhtml_legend=1 00:08:13.687 --rc geninfo_all_blocks=1 00:08:13.687 --rc geninfo_unexecuted_blocks=1 00:08:13.687 00:08:13.687 ' 00:08:13.687 13:18:55 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:13.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:13.687 --rc genhtml_branch_coverage=1 00:08:13.687 --rc genhtml_function_coverage=1 00:08:13.687 --rc genhtml_legend=1 00:08:13.687 --rc geninfo_all_blocks=1 00:08:13.687 --rc geninfo_unexecuted_blocks=1 00:08:13.687 00:08:13.687 ' 00:08:13.687 13:18:55 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:13.687 13:18:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:13.687 13:18:55 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:13.687 13:18:55 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:13.687 13:18:55 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:13.687 13:18:55 event -- common/autotest_common.sh@10 -- # set +x 00:08:13.687 ************************************ 00:08:13.687 START TEST event_perf 00:08:13.687 ************************************ 00:08:13.687 13:18:55 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:13.687 Running I/O for 1 seconds...[2024-10-07 13:18:55.366449] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
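The `cmp_versions 1.15 '<' 2` trace earlier in this block splits both versions on `.` and compares them component by component. A compact standalone sketch of the same idea (not the scripts/common.sh code itself; missing components are treated as 0 and only numeric parts are handled):

```shell
#!/usr/bin/env bash
# version_lt A B -> success (0) when version A is strictly less than B.
# Sketch of the cmp_versions idea from the trace above; numeric-only.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}   # pad short versions with 0
        ((x < y)) && return 0
        ((x > y)) && return 1
    done
    return 1   # equal -> not less-than
}
```

With this, `version_lt 1.15 2` succeeds, matching the `lt 1.15 2` result the lcov check relies on.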
00:08:13.687 [2024-10-07 13:18:55.366511] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688769 ] 00:08:13.947 [2024-10-07 13:18:55.424802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:13.947 [2024-10-07 13:18:55.535799] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.947 [2024-10-07 13:18:55.535861] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:13.947 [2024-10-07 13:18:55.535928] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:13.947 [2024-10-07 13:18:55.535932] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.329 Running I/O for 1 seconds... 00:08:15.329 lcore 0: 235326 00:08:15.329 lcore 1: 235327 00:08:15.329 lcore 2: 235326 00:08:15.329 lcore 3: 235326 00:08:15.329 done. 
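event_perf above was launched with `-m 0xF`, which is why exactly four reactors (lcores 0-3) start and report. A small illustrative helper (hypothetical, not part of the SPDK scripts) that expands such a hex coremask into the core ids it selects:

```shell
#!/usr/bin/env bash
# coremask_to_cores 0xF -> "0 1 2 3"; illustrative only.
# Walks the mask bit by bit, collecting the set core ids.
coremask_to_cores() {
    local mask=$(( $1 )) core=0
    local -a out=()
    while (( mask )); do
        (( mask & 1 )) && out+=("$core")
        (( mask >>= 1 ))
        (( core++ )) || true
    done
    echo "${out[@]}"
}
```

The same mapping explains the `-c 0x1` runs later in the log, which start a single reactor on core 0.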
00:08:15.329 00:08:15.329 real 0m1.297s 00:08:15.329 user 0m4.206s 00:08:15.329 sys 0m0.085s 00:08:15.329 13:18:56 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.329 13:18:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:15.329 ************************************ 00:08:15.329 END TEST event_perf 00:08:15.329 ************************************ 00:08:15.329 13:18:56 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:15.329 13:18:56 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:15.329 13:18:56 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.329 13:18:56 event -- common/autotest_common.sh@10 -- # set +x 00:08:15.329 ************************************ 00:08:15.329 START TEST event_reactor 00:08:15.329 ************************************ 00:08:15.329 13:18:56 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:15.329 [2024-10-07 13:18:56.715602] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:08:15.329 [2024-10-07 13:18:56.715696] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1688924 ] 00:08:15.329 [2024-10-07 13:18:56.770439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.329 [2024-10-07 13:18:56.871714] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.266 test_start 00:08:16.266 oneshot 00:08:16.266 tick 100 00:08:16.266 tick 100 00:08:16.266 tick 250 00:08:16.266 tick 100 00:08:16.266 tick 100 00:08:16.266 tick 100 00:08:16.266 tick 250 00:08:16.266 tick 500 00:08:16.266 tick 100 00:08:16.266 tick 100 00:08:16.266 tick 250 00:08:16.266 tick 100 00:08:16.266 tick 100 00:08:16.266 test_end 00:08:16.526 00:08:16.526 real 0m1.281s 00:08:16.526 user 0m1.201s 00:08:16.526 sys 0m0.075s 00:08:16.526 13:18:57 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.526 13:18:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:16.526 ************************************ 00:08:16.526 END TEST event_reactor 00:08:16.526 ************************************ 00:08:16.526 13:18:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:16.526 13:18:58 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:16.526 13:18:58 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.526 13:18:58 event -- common/autotest_common.sh@10 -- # set +x 00:08:16.526 ************************************ 00:08:16.526 START TEST event_reactor_perf 00:08:16.526 ************************************ 00:08:16.526 13:18:58 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:08:16.526 [2024-10-07 13:18:58.047477] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:16.527 [2024-10-07 13:18:58.047537] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1689070 ] 00:08:16.527 [2024-10-07 13:18:58.103340] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.527 [2024-10-07 13:18:58.208538] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.905 test_start 00:08:17.905 test_end 00:08:17.905 Performance: 446768 events per second 00:08:17.905 00:08:17.905 real 0m1.285s 00:08:17.905 user 0m1.214s 00:08:17.905 sys 0m0.067s 00:08:17.905 13:18:59 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.905 13:18:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:17.905 ************************************ 00:08:17.905 END TEST event_reactor_perf 00:08:17.905 ************************************ 00:08:17.905 13:18:59 event -- event/event.sh@49 -- # uname -s 00:08:17.905 13:18:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:17.905 13:18:59 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:17.905 13:18:59 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:17.905 13:18:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.905 13:18:59 event -- common/autotest_common.sh@10 -- # set +x 00:08:17.905 ************************************ 00:08:17.905 START TEST event_scheduler 00:08:17.905 ************************************ 00:08:17.905 13:18:59 event.event_scheduler -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:17.905 * Looking for test storage... 00:08:17.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:08:17.905 13:18:59 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:17.905 13:18:59 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:08:17.905 13:18:59 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:17.905 13:18:59 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:17.905 13:18:59 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:17.905 13:18:59 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:17.905 13:18:59 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:17.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.905 --rc genhtml_branch_coverage=1 00:08:17.905 --rc genhtml_function_coverage=1 00:08:17.906 --rc genhtml_legend=1 00:08:17.906 --rc geninfo_all_blocks=1 00:08:17.906 --rc geninfo_unexecuted_blocks=1 00:08:17.906 00:08:17.906 ' 00:08:17.906 13:18:59 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:17.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.906 --rc genhtml_branch_coverage=1 00:08:17.906 --rc genhtml_function_coverage=1 00:08:17.906 --rc 
genhtml_legend=1 00:08:17.906 --rc geninfo_all_blocks=1 00:08:17.906 --rc geninfo_unexecuted_blocks=1 00:08:17.906 00:08:17.906 ' 00:08:17.906 13:18:59 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:17.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.906 --rc genhtml_branch_coverage=1 00:08:17.906 --rc genhtml_function_coverage=1 00:08:17.906 --rc genhtml_legend=1 00:08:17.906 --rc geninfo_all_blocks=1 00:08:17.906 --rc geninfo_unexecuted_blocks=1 00:08:17.906 00:08:17.906 ' 00:08:17.906 13:18:59 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:17.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:17.906 --rc genhtml_branch_coverage=1 00:08:17.906 --rc genhtml_function_coverage=1 00:08:17.906 --rc genhtml_legend=1 00:08:17.906 --rc geninfo_all_blocks=1 00:08:17.906 --rc geninfo_unexecuted_blocks=1 00:08:17.906 00:08:17.906 ' 00:08:17.906 13:18:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:17.906 13:18:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1689257 00:08:17.906 13:18:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:17.906 13:18:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:17.906 13:18:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1689257 00:08:17.906 13:18:59 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 1689257 ']' 00:08:17.906 13:18:59 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.906 13:18:59 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:17.906 13:18:59 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.906 13:18:59 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:17.906 13:18:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:17.906 [2024-10-07 13:18:59.566840] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:17.906 [2024-10-07 13:18:59.566920] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1689257 ] 00:08:18.166 [2024-10-07 13:18:59.628225] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.166 [2024-10-07 13:18:59.745538] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.166 [2024-10-07 13:18:59.745691] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.166 [2024-10-07 13:18:59.745775] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.166 [2024-10-07 13:18:59.745779] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.166 13:18:59 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:18.166 13:18:59 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:08:18.166 13:18:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:18.166 13:18:59 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.166 13:18:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:18.166 [2024-10-07 13:18:59.806508] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:08:18.166 [2024-10-07 13:18:59.806536] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:08:18.166 [2024-10-07 13:18:59.806553] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:08:18.166 [2024-10-07 13:18:59.806563] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:08:18.166 [2024-10-07 13:18:59.806572] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:08:18.166 13:18:59 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.166 13:18:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:08:18.166 13:18:59 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.166 13:18:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:18.428 [2024-10-07 13:18:59.905786] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:08:18.428 13:18:59 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.428 13:18:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:08:18.428 13:18:59 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:18.428 13:18:59 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:18.428 13:18:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:18.428 ************************************
00:08:18.428 START TEST scheduler_create_thread ************************************
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:18.428 2
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:18.428 3
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:18.428 4
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:18.428 5
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:18.428 6
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:18.428 7
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:18.428 8
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.428 13:18:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:18.428 9
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:18.428 10
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:18.428 13:19:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:19.803 13:19:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:19.803 13:19:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:08:19.803 13:19:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:08:19.803 13:19:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:19.803 13:19:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:21.183 13:19:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:21.183
00:08:21.183 real 0m2.617s
00:08:21.183 user 0m0.010s
00:08:21.183 sys 0m0.006s
00:08:21.183 13:19:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:21.183 13:19:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:08:21.183 ************************************
00:08:21.183 END TEST scheduler_create_thread ************************************
00:08:21.183 13:19:02 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:08:21.183 13:19:02 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1689257
00:08:21.183 13:19:02 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 1689257 ']'
00:08:21.183 13:19:02 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 1689257
00:08:21.183 13:19:02 event.event_scheduler -- common/autotest_common.sh@955 -- # uname
00:08:21.183 13:19:02 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:08:21.183 13:19:02 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1689257
00:08:21.183 13:19:02 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:08:21.183 13:19:02 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:08:21.183 13:19:02 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1689257' killing process with pid 1689257
00:08:21.183 13:19:02 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 1689257
00:08:21.183 13:19:02 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 1689257
00:08:21.443 [2024-10-07 13:19:03.032901] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:08:21.700
00:08:21.700 real 0m3.935s
00:08:21.700 user 0m5.856s
00:08:21.700 sys 0m0.374s
00:08:21.700 13:19:03 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:21.700 13:19:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:08:21.700 ************************************
00:08:21.700 END TEST event_scheduler ************************************
00:08:21.700 13:19:03 event -- event/event.sh@51 -- # modprobe -n nbd
00:08:21.700 13:19:03 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:08:21.700 13:19:03 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:21.700 13:19:03 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:21.700 13:19:03 event -- common/autotest_common.sh@10 -- # set +x
00:08:21.700 ************************************
00:08:21.700 START TEST app_repeat ************************************
00:08:21.700 13:19:03 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test
00:08:21.700 13:19:03 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:21.700 13:19:03 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:21.700 13:19:03 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:08:21.700 13:19:03 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:21.700 13:19:03 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:08:21.700 13:19:03 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:08:21.700 13:19:03 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:08:21.700 13:19:03 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1689806
00:08:21.700 13:19:03 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:08:21.700 13:19:03 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:08:21.700 13:19:03 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1689806' Process app_repeat pid: 1689806
00:08:21.700 13:19:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:21.700 13:19:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' spdk_app_start Round 0
00:08:21.700 13:19:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1689806 /var/tmp/spdk-nbd.sock
00:08:21.700 13:19:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1689806 ']'
00:08:21.700 13:19:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:21.700 13:19:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:21.700 13:19:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:21.700 13:19:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:21.700 13:19:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:21.700 [2024-10-07 13:19:03.396724] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization...
00:08:21.700 [2024-10-07 13:19:03.396788] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1689806 ]
00:08:21.956 [2024-10-07 13:19:03.449947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:21.956 [2024-10-07 13:19:03.549788] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:08:21.956 [2024-10-07 13:19:03.549793] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:21.956 13:19:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:21.956 13:19:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:08:21.956 13:19:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:22.214 Malloc0
00:08:22.472 13:19:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:22.729 Malloc1
00:08:22.729 13:19:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:22.729 13:19:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:22.987 /dev/nbd0
00:08:22.987 13:19:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:22.987 13:19:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:22.987 13:19:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:08:22.987 13:19:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:08:22.987 13:19:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:22.987 13:19:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:22.987 13:19:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:08:22.987 13:19:04 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:08:22.987 13:19:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:22.987 13:19:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:22.987 13:19:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:22.987 1+0 records in
00:08:22.987 1+0 records out
00:08:22.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214123 s, 19.1 MB/s
00:08:22.987 13:19:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:08:22.987 13:19:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:08:22.987 13:19:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:08:22.987 13:19:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:22.987 13:19:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:08:22.987 13:19:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:22.987 13:19:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:22.987 13:19:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:23.245 /dev/nbd1
00:08:23.245 13:19:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:23.245 13:19:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:23.245 13:19:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:08:23.245 13:19:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:08:23.245 13:19:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:23.245 13:19:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:23.245 13:19:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:08:23.245 13:19:04 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:08:23.245 13:19:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:23.245 13:19:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:23.245 13:19:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:23.245 1+0 records in
00:08:23.245 1+0 records out
00:08:23.245 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242337 s, 16.9 MB/s
00:08:23.245 13:19:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:08:23.245 13:19:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:08:23.245 13:19:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:08:23.245 13:19:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:23.245 13:19:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:08:23.245 13:19:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:23.245 13:19:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:23.245 13:19:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:23.245 13:19:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:23.245 13:19:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:23.504 {
00:08:23.504 "nbd_device": "/dev/nbd0",
00:08:23.504 "bdev_name": "Malloc0"
00:08:23.504 },
00:08:23.504 {
00:08:23.504 "nbd_device": "/dev/nbd1",
00:08:23.504 "bdev_name": "Malloc1"
00:08:23.504 }
00:08:23.504 ]'
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:08:23.504 {
00:08:23.504 "nbd_device": "/dev/nbd0",
00:08:23.504 "bdev_name": "Malloc0"
00:08:23.504 },
00:08:23.504 {
00:08:23.504 "nbd_device": "/dev/nbd1",
00:08:23.504 "bdev_name": "Malloc1"
00:08:23.504 }
00:08:23.504 ]'
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:23.504 /dev/nbd1'
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:23.504 /dev/nbd1'
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:23.504 256+0 records in
00:08:23.504 256+0 records out
00:08:23.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504168 s, 208 MB/s
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:23.504 13:19:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:23.797 256+0 records in
00:08:23.797 256+0 records out
00:08:23.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220866 s, 47.5 MB/s
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:08:23.797 256+0 records in
00:08:23.797 256+0 records out
00:08:23.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221809 s, 47.3 MB/s
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:23.797 13:19:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:08:24.085 13:19:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:24.085 13:19:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:24.085 13:19:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:24.085 13:19:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:24.085 13:19:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:24.085 13:19:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:24.085 13:19:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:24.085 13:19:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:24.085 13:19:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:24.085 13:19:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:08:24.342 13:19:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:08:24.342 13:19:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:08:24.342 13:19:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:08:24.342 13:19:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:24.342 13:19:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:24.342 13:19:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:08:24.342 13:19:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:08:24.342 13:19:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:08:24.342 13:19:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:24.342 13:19:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:24.342 13:19:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:24.601 13:19:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:24.601 13:19:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:24.601 13:19:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:24.601 13:19:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:24.601 13:19:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:08:24.601 13:19:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:24.601 13:19:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:08:24.601 13:19:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:08:24.601 13:19:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:08:24.601 13:19:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:08:24.601 13:19:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:08:24.601 13:19:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:08:24.601 13:19:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:08:24.860 13:19:06 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:08:25.120 [2024-10-07 13:19:06.750568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:25.380 [2024-10-07 13:19:06.851821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:08:25.380 [2024-10-07 13:19:06.851821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:08:25.380 [2024-10-07 13:19:06.908476] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:08:25.380 [2024-10-07 13:19:06.908557] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:08:27.908 13:19:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:08:27.909 13:19:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' spdk_app_start Round 1
00:08:27.909 13:19:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1689806 /var/tmp/spdk-nbd.sock
00:08:27.909 13:19:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1689806 ']'
00:08:27.909 13:19:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:08:27.909 13:19:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100
00:08:27.909 13:19:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:08:27.909 13:19:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable
00:08:27.909 13:19:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:08:28.166 13:19:09 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:08:28.166 13:19:09 event.app_repeat -- common/autotest_common.sh@864 -- # return 0
00:08:28.166 13:19:09 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:28.423 Malloc0
00:08:28.423 13:19:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:08:28.679 Malloc1
00:08:28.680 13:19:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:28.680 13:19:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:08:28.949 /dev/nbd0
00:08:29.212 13:19:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:08:29.212 13:19:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:08:29.212 13:19:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0
00:08:29.212 13:19:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:08:29.212 13:19:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:29.212 13:19:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:29.212 13:19:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions
00:08:29.212 13:19:10 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:08:29.212 13:19:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:29.212 13:19:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:29.212 13:19:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:29.212 1+0 records in
00:08:29.212 1+0 records out
00:08:29.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000172193 s, 23.8 MB/s
00:08:29.212 13:19:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:08:29.212 13:19:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:08:29.212 13:19:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:08:29.212 13:19:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:29.212 13:19:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:08:29.212 13:19:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:29.212 13:19:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:29.212 13:19:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:08:29.470 /dev/nbd1
00:08:29.470 13:19:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:08:29.470 13:19:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:08:29.470 13:19:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1
00:08:29.470 13:19:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i
00:08:29.470 13:19:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 ))
00:08:29.470 13:19:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 ))
00:08:29.470 13:19:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions
00:08:29.470 13:19:10 event.app_repeat -- common/autotest_common.sh@873 -- # break
00:08:29.470 13:19:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 ))
00:08:29.470 13:19:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 ))
00:08:29.470 13:19:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:08:29.470 1+0 records in
00:08:29.470 1+0 records out
00:08:29.470 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236151 s, 17.3 MB/s
00:08:29.470 13:19:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:08:29.470 13:19:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096
00:08:29.470 13:19:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:08:29.470 13:19:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']'
00:08:29.470 13:19:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0
00:08:29.470 13:19:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:29.470 13:19:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:08:29.470 13:19:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:08:29.470 13:19:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:08:29.470 13:19:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:29.728 {
00:08:29.728 "nbd_device": "/dev/nbd0",
00:08:29.728 "bdev_name": "Malloc0"
00:08:29.728 },
00:08:29.728 {
00:08:29.728 "nbd_device": "/dev/nbd1",
00:08:29.728 "bdev_name": "Malloc1"
00:08:29.728 }
00:08:29.728 ]'
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:08:29.728 {
00:08:29.728 "nbd_device": "/dev/nbd0",
00:08:29.728 "bdev_name": "Malloc0"
00:08:29.728 },
00:08:29.728 {
00:08:29.728 "nbd_device": "/dev/nbd1",
00:08:29.728 "bdev_name": "Malloc1"
00:08:29.728 }
00:08:29.728 ]'
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:08:29.728 /dev/nbd1'
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:08:29.728 /dev/nbd1'
13:19:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:08:29.728 256+0 records in
00:08:29.728 256+0 records out
00:08:29.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511134 s, 205 MB/s
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:08:29.728 256+0 records in
00:08:29.728 256+0 records out
00:08:29.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0199127 s, 52.7 MB/s
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:29.728 256+0 records in 00:08:29.728 256+0 records out 00:08:29.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222353 s, 47.2 MB/s 00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:29.728 13:19:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:29.729 13:19:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:29.729 13:19:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:29.729 13:19:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:29.729 13:19:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:29.729 13:19:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:08:29.729 13:19:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:29.729 13:19:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:29.729 13:19:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.729 13:19:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:29.986 13:19:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:29.986 13:19:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:29.986 13:19:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:29.986 13:19:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.986 13:19:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.986 13:19:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:29.986 13:19:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:29.986 13:19:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:29.986 13:19:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.986 13:19:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:30.265 13:19:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:30.265 13:19:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:30.265 13:19:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:30.265 13:19:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.265 13:19:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.265 13:19:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:30.524 13:19:11 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:08:30.524 13:19:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.524 13:19:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:30.524 13:19:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.524 13:19:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:30.524 13:19:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:30.782 13:19:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:30.782 13:19:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:30.782 13:19:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:30.782 13:19:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:30.782 13:19:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:30.782 13:19:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:30.782 13:19:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:30.782 13:19:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:30.782 13:19:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:30.782 13:19:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:30.782 13:19:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:30.782 13:19:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:31.040 13:19:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:31.299 [2024-10-07 13:19:12.826153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:31.299 [2024-10-07 13:19:12.935197] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.299 [2024-10-07 13:19:12.935197] 
reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.299 [2024-10-07 13:19:12.990025] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:31.299 [2024-10-07 13:19:12.990094] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:34.592 13:19:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:34.592 13:19:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:34.592 spdk_app_start Round 2 00:08:34.592 13:19:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1689806 /var/tmp/spdk-nbd.sock 00:08:34.592 13:19:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1689806 ']' 00:08:34.592 13:19:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:34.592 13:19:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.592 13:19:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:34.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:34.592 13:19:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.592 13:19:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:34.592 13:19:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:34.592 13:19:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:34.592 13:19:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:34.592 Malloc0 00:08:34.592 13:19:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:34.849 Malloc1 00:08:34.849 13:19:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:34.849 13:19:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.849 13:19:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:34.849 13:19:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:34.849 13:19:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:34.849 13:19:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:34.849 13:19:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:34.849 13:19:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.850 13:19:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:34.850 13:19:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:34.850 13:19:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:34.850 13:19:16 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:08:34.850 13:19:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:34.850 13:19:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:34.850 13:19:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:34.850 13:19:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:35.107 /dev/nbd0 00:08:35.107 13:19:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:35.107 13:19:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:35.107 13:19:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:08:35.107 13:19:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:35.107 13:19:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:35.107 13:19:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:35.107 13:19:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:08:35.107 13:19:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:35.107 13:19:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:35.107 13:19:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:35.107 13:19:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:35.107 1+0 records in 00:08:35.107 1+0 records out 00:08:35.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027776 s, 14.7 MB/s 00:08:35.107 13:19:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:35.107 13:19:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:35.107 13:19:16 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:35.107 13:19:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:35.107 13:19:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:35.107 13:19:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.108 13:19:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:35.108 13:19:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:35.365 /dev/nbd1 00:08:35.365 13:19:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:35.365 13:19:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:35.365 13:19:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:08:35.365 13:19:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:08:35.365 13:19:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:08:35.365 13:19:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:08:35.365 13:19:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:08:35.365 13:19:17 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:08:35.365 13:19:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:08:35.365 13:19:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:08:35.365 13:19:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:35.365 1+0 records in 00:08:35.365 1+0 records out 00:08:35.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000150084 s, 27.3 MB/s 00:08:35.365 13:19:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:35.365 13:19:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:08:35.365 13:19:17 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:35.365 13:19:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:08:35.365 13:19:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:08:35.365 13:19:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.365 13:19:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:35.365 13:19:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:35.365 13:19:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.365 13:19:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:35.622 13:19:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:35.622 { 00:08:35.622 "nbd_device": "/dev/nbd0", 00:08:35.622 "bdev_name": "Malloc0" 00:08:35.622 }, 00:08:35.622 { 00:08:35.622 "nbd_device": "/dev/nbd1", 00:08:35.622 "bdev_name": "Malloc1" 00:08:35.622 } 00:08:35.622 ]' 00:08:35.622 13:19:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:35.622 { 00:08:35.622 "nbd_device": "/dev/nbd0", 00:08:35.622 "bdev_name": "Malloc0" 00:08:35.622 }, 00:08:35.622 { 00:08:35.622 "nbd_device": "/dev/nbd1", 00:08:35.622 "bdev_name": "Malloc1" 00:08:35.623 } 00:08:35.623 ]' 00:08:35.623 13:19:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:35.880 /dev/nbd1' 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:35.880 /dev/nbd1' 00:08:35.880 
13:19:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:35.880 256+0 records in 00:08:35.880 256+0 records out 00:08:35.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490935 s, 214 MB/s 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:35.880 256+0 records in 00:08:35.880 256+0 records out 00:08:35.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208479 s, 50.3 MB/s 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:35.880 256+0 records in 00:08:35.880 256+0 records out 00:08:35.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220875 s, 47.5 MB/s 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:35.880 13:19:17 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.881 13:19:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:08:35.881 13:19:17 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:35.881 13:19:17 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:35.881 13:19:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:35.881 13:19:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:36.138 13:19:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:36.138 13:19:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:36.138 13:19:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:36.138 13:19:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.138 13:19:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.138 13:19:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:36.138 13:19:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:36.138 13:19:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.138 13:19:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.138 13:19:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:36.396 13:19:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:36.396 13:19:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:36.396 13:19:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:36.396 13:19:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.396 13:19:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.396 13:19:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:36.396 13:19:18 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:08:36.396 13:19:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.396 13:19:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:36.396 13:19:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.396 13:19:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:36.654 13:19:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:36.654 13:19:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:36.654 13:19:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:36.654 13:19:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:36.654 13:19:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:36.654 13:19:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.654 13:19:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:36.654 13:19:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:36.654 13:19:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:36.654 13:19:18 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:36.654 13:19:18 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:36.654 13:19:18 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:36.654 13:19:18 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:37.221 13:19:18 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:37.221 [2024-10-07 13:19:18.888660] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:37.481 [2024-10-07 13:19:18.990243] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.481 [2024-10-07 13:19:18.990248] 
reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.481 [2024-10-07 13:19:19.047531] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:37.481 [2024-10-07 13:19:19.047612] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:40.021 13:19:21 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1689806 /var/tmp/spdk-nbd.sock 00:08:40.021 13:19:21 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 1689806 ']' 00:08:40.021 13:19:21 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:40.021 13:19:21 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.021 13:19:21 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:40.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:40.021 13:19:21 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.021 13:19:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:40.280 13:19:21 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.280 13:19:21 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:08:40.280 13:19:21 event.app_repeat -- event/event.sh@39 -- # killprocess 1689806 00:08:40.280 13:19:21 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 1689806 ']' 00:08:40.280 13:19:21 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 1689806 00:08:40.280 13:19:21 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:08:40.280 13:19:21 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:40.280 13:19:21 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1689806 00:08:40.280 13:19:21 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:40.280 13:19:21 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:40.280 13:19:21 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1689806' 00:08:40.280 killing process with pid 1689806 00:08:40.280 13:19:21 event.app_repeat -- common/autotest_common.sh@969 -- # kill 1689806 00:08:40.280 13:19:21 event.app_repeat -- common/autotest_common.sh@974 -- # wait 1689806 00:08:40.540 spdk_app_start is called in Round 0. 00:08:40.540 Shutdown signal received, stop current app iteration 00:08:40.540 Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 reinitialization... 00:08:40.540 spdk_app_start is called in Round 1. 00:08:40.540 Shutdown signal received, stop current app iteration 00:08:40.540 Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 reinitialization... 00:08:40.540 spdk_app_start is called in Round 2. 
00:08:40.540 Shutdown signal received, stop current app iteration 00:08:40.540 Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 reinitialization... 00:08:40.540 spdk_app_start is called in Round 3. 00:08:40.540 Shutdown signal received, stop current app iteration 00:08:40.540 13:19:22 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:40.540 13:19:22 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:40.540 00:08:40.540 real 0m18.831s 00:08:40.540 user 0m41.304s 00:08:40.540 sys 0m3.249s 00:08:40.540 13:19:22 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.540 13:19:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:40.540 ************************************ 00:08:40.540 END TEST app_repeat 00:08:40.540 ************************************ 00:08:40.540 13:19:22 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:40.540 13:19:22 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:40.540 13:19:22 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:40.540 13:19:22 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.540 13:19:22 event -- common/autotest_common.sh@10 -- # set +x 00:08:40.798 ************************************ 00:08:40.798 START TEST cpu_locks 00:08:40.798 ************************************ 00:08:40.798 13:19:22 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:40.798 * Looking for test storage... 
00:08:40.798 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:40.798 13:19:22 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:40.799 13:19:22 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:08:40.799 13:19:22 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:40.799 13:19:22 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:40.799 13:19:22 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:40.799 13:19:22 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:40.799 13:19:22 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:40.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.799 --rc genhtml_branch_coverage=1 00:08:40.799 --rc genhtml_function_coverage=1 00:08:40.799 --rc genhtml_legend=1 00:08:40.799 --rc geninfo_all_blocks=1 00:08:40.799 --rc geninfo_unexecuted_blocks=1 00:08:40.799 00:08:40.799 ' 00:08:40.799 13:19:22 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:40.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.799 --rc genhtml_branch_coverage=1 00:08:40.799 --rc genhtml_function_coverage=1 00:08:40.799 --rc genhtml_legend=1 00:08:40.799 --rc geninfo_all_blocks=1 00:08:40.799 --rc geninfo_unexecuted_blocks=1 
00:08:40.799 00:08:40.799 ' 00:08:40.799 13:19:22 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:40.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.799 --rc genhtml_branch_coverage=1 00:08:40.799 --rc genhtml_function_coverage=1 00:08:40.799 --rc genhtml_legend=1 00:08:40.799 --rc geninfo_all_blocks=1 00:08:40.799 --rc geninfo_unexecuted_blocks=1 00:08:40.799 00:08:40.799 ' 00:08:40.799 13:19:22 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:40.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:40.799 --rc genhtml_branch_coverage=1 00:08:40.799 --rc genhtml_function_coverage=1 00:08:40.799 --rc genhtml_legend=1 00:08:40.799 --rc geninfo_all_blocks=1 00:08:40.799 --rc geninfo_unexecuted_blocks=1 00:08:40.799 00:08:40.799 ' 00:08:40.799 13:19:22 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:40.799 13:19:22 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:40.799 13:19:22 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:40.799 13:19:22 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:40.799 13:19:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:40.799 13:19:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.799 13:19:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:40.799 ************************************ 00:08:40.799 START TEST default_locks 00:08:40.799 ************************************ 00:08:40.799 13:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:08:40.799 13:19:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1692217 00:08:40.799 13:19:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:08:40.799 13:19:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1692217 00:08:40.799 13:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1692217 ']' 00:08:40.799 13:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.799 13:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.799 13:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.799 13:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.799 13:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:40.799 [2024-10-07 13:19:22.483756] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:08:40.799 [2024-10-07 13:19:22.483833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1692217 ] 00:08:41.058 [2024-10-07 13:19:22.539762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.058 [2024-10-07 13:19:22.640754] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.317 13:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:41.317 13:19:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:08:41.317 13:19:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1692217 00:08:41.317 13:19:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1692217 00:08:41.317 13:19:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:41.576 lslocks: write error 00:08:41.576 13:19:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1692217 00:08:41.576 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 1692217 ']' 00:08:41.576 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 1692217 00:08:41.576 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:08:41.576 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:41.576 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1692217 00:08:41.576 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:41.576 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:41.576 13:19:23 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1692217' 00:08:41.576 killing process with pid 1692217 00:08:41.576 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 1692217 00:08:41.576 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 1692217 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1692217 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1692217 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1692217 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 1692217 ']' 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:42.145 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1692217) - No such process 00:08:42.145 ERROR: process (pid: 1692217) is no longer running 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:42.145 00:08:42.145 real 0m1.152s 00:08:42.145 user 0m1.126s 00:08:42.145 sys 0m0.485s 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.145 13:19:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:42.145 ************************************ 00:08:42.145 END TEST default_locks 00:08:42.145 ************************************ 00:08:42.145 13:19:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:42.145 13:19:23 event.cpu_locks -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.145 13:19:23 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.145 13:19:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:42.145 ************************************ 00:08:42.145 START TEST default_locks_via_rpc 00:08:42.145 ************************************ 00:08:42.145 13:19:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:08:42.145 13:19:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1692373 00:08:42.145 13:19:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:42.145 13:19:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1692373 00:08:42.145 13:19:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1692373 ']' 00:08:42.145 13:19:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.145 13:19:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:42.145 13:19:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.145 13:19:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:42.145 13:19:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.145 [2024-10-07 13:19:23.692516] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:08:42.145 [2024-10-07 13:19:23.692598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1692373 ] 00:08:42.145 [2024-10-07 13:19:23.746557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.146 [2024-10-07 13:19:23.844808] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.405 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:42.405 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:42.405 13:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:42.405 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.405 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.405 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.405 13:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:42.405 13:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:42.405 13:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:42.405 13:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:42.405 13:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:42.405 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.405 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:42.663 13:19:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.663 13:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1692373 00:08:42.663 13:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1692373 00:08:42.663 13:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:42.923 13:19:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1692373 00:08:42.923 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 1692373 ']' 00:08:42.923 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 1692373 00:08:42.923 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:08:42.923 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:42.923 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1692373 00:08:42.923 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:42.923 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:42.923 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1692373' 00:08:42.923 killing process with pid 1692373 00:08:42.923 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 1692373 00:08:42.923 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 1692373 00:08:43.490 00:08:43.490 real 0m1.305s 00:08:43.490 user 0m1.268s 00:08:43.490 sys 0m0.505s 00:08:43.490 13:19:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.490 13:19:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:43.490 ************************************ 00:08:43.490 END TEST default_locks_via_rpc 00:08:43.490 ************************************ 00:08:43.490 13:19:24 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:43.490 13:19:24 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:43.490 13:19:24 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.490 13:19:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:43.490 ************************************ 00:08:43.490 START TEST non_locking_app_on_locked_coremask 00:08:43.490 ************************************ 00:08:43.490 13:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:08:43.490 13:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1692533 00:08:43.490 13:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:43.490 13:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1692533 /var/tmp/spdk.sock 00:08:43.490 13:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1692533 ']' 00:08:43.490 13:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.490 13:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:43.490 13:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:08:43.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.490 13:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:43.490 13:19:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:43.490 [2024-10-07 13:19:25.051335] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:43.490 [2024-10-07 13:19:25.051413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1692533 ] 00:08:43.490 [2024-10-07 13:19:25.109012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.749 [2024-10-07 13:19:25.215491] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.007 13:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.007 13:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:44.007 13:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1692652 00:08:44.007 13:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1692652 /var/tmp/spdk2.sock 00:08:44.007 13:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:44.007 13:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1692652 ']' 00:08:44.007 13:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:08:44.007 13:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.007 13:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:44.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:44.008 13:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.008 13:19:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:44.008 [2024-10-07 13:19:25.531758] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:44.008 [2024-10-07 13:19:25.531853] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1692652 ] 00:08:44.008 [2024-10-07 13:19:25.612251] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:44.008 [2024-10-07 13:19:25.612287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.267 [2024-10-07 13:19:25.821224] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.834 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.834 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:44.834 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1692533 00:08:44.834 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1692533 00:08:44.834 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:45.402 lslocks: write error 00:08:45.402 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1692533 00:08:45.402 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1692533 ']' 00:08:45.402 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1692533 00:08:45.402 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:45.402 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.402 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1692533 00:08:45.402 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:45.402 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:45.402 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1692533' 00:08:45.402 killing process with pid 1692533 00:08:45.402 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1692533 00:08:45.402 13:19:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1692533 00:08:46.340 13:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1692652 00:08:46.340 13:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1692652 ']' 00:08:46.340 13:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1692652 00:08:46.340 13:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:46.340 13:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:46.340 13:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1692652 00:08:46.340 13:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:46.340 13:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:46.340 13:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1692652' 00:08:46.340 killing process with pid 1692652 00:08:46.340 13:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1692652 00:08:46.340 13:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1692652 00:08:46.908 00:08:46.908 real 0m3.378s 00:08:46.908 user 0m3.605s 00:08:46.908 sys 0m1.068s 00:08:46.908 13:19:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.908 13:19:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:46.908 ************************************ 00:08:46.908 END TEST non_locking_app_on_locked_coremask 00:08:46.908 ************************************ 00:08:46.908 13:19:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:46.908 13:19:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:46.908 13:19:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.908 13:19:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:46.908 ************************************ 00:08:46.908 START TEST locking_app_on_unlocked_coremask 00:08:46.908 ************************************ 00:08:46.908 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:08:46.908 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1693021 00:08:46.908 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:46.908 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1693021 /var/tmp/spdk.sock 00:08:46.908 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1693021 ']' 00:08:46.908 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.908 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:46.908 13:19:28 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.908 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:46.908 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:46.908 [2024-10-07 13:19:28.478825] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:46.908 [2024-10-07 13:19:28.478915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693021 ] 00:08:46.908 [2024-10-07 13:19:28.533051] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:46.908 [2024-10-07 13:19:28.533077] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.167 [2024-10-07 13:19:28.633869] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.453 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:47.453 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:47.453 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1693069 00:08:47.453 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:47.453 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1693069 /var/tmp/spdk2.sock 00:08:47.453 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1693069 ']' 00:08:47.453 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:47.453 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:47.453 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:47.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:47.453 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:47.453 13:19:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:47.453 [2024-10-07 13:19:28.936943] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:08:47.453 [2024-10-07 13:19:28.937041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693069 ] 00:08:47.453 [2024-10-07 13:19:29.015561] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.712 [2024-10-07 13:19:29.223508] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.278 13:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:48.278 13:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:48.279 13:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1693069 00:08:48.279 13:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1693069 00:08:48.279 13:19:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:48.845 lslocks: write error 00:08:48.845 13:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1693021 00:08:48.845 13:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1693021 ']' 00:08:48.845 13:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1693021 00:08:48.845 13:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:48.845 13:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.845 13:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1693021 00:08:48.845 13:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:48.845 13:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:48.845 13:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1693021' 00:08:48.845 killing process with pid 1693021 00:08:48.845 13:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1693021 00:08:48.845 13:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1693021 00:08:49.785 13:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1693069 00:08:49.785 13:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1693069 ']' 00:08:49.785 13:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 1693069 00:08:49.785 13:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:49.785 13:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.785 13:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1693069 00:08:49.785 13:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:49.785 13:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.785 13:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1693069' 00:08:49.785 killing process with pid 1693069 00:08:49.785 13:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 1693069 00:08:49.785 13:19:31 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 1693069 00:08:50.044 00:08:50.044 real 0m3.296s 00:08:50.044 user 0m3.547s 00:08:50.044 sys 0m0.996s 00:08:50.044 13:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.044 13:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:50.044 ************************************ 00:08:50.044 END TEST locking_app_on_unlocked_coremask 00:08:50.044 ************************************ 00:08:50.044 13:19:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:08:50.044 13:19:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:50.044 13:19:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.044 13:19:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:50.337 ************************************ 00:08:50.337 START TEST locking_app_on_locked_coremask 00:08:50.337 ************************************ 00:08:50.337 13:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:08:50.337 13:19:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1693439 00:08:50.337 13:19:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:50.337 13:19:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1693439 /var/tmp/spdk.sock 00:08:50.338 13:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1693439 ']' 00:08:50.338 13:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 
00:08:50.338 13:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.338 13:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.338 13:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.338 13:19:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:50.338 [2024-10-07 13:19:31.825390] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:50.338 [2024-10-07 13:19:31.825499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693439 ] 00:08:50.338 [2024-10-07 13:19:31.883878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.338 [2024-10-07 13:19:31.994544] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1693494 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1693494 /var/tmp/spdk2.sock 
00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1693494 /var/tmp/spdk2.sock 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1693494 /var/tmp/spdk2.sock 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 1693494 ']' 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:50.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.620 13:19:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:50.620 [2024-10-07 13:19:32.309827] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:50.620 [2024-10-07 13:19:32.309918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693494 ] 00:08:50.880 [2024-10-07 13:19:32.389358] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1693439 has claimed it. 00:08:50.880 [2024-10-07 13:19:32.389414] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:51.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1693494) - No such process 00:08:51.447 ERROR: process (pid: 1693494) is no longer running 00:08:51.447 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.447 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:51.447 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:51.447 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:51.447 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:51.447 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:51.447 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1693439 00:08:51.447 13:19:33 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1693439 00:08:51.447 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:52.014 lslocks: write error 00:08:52.014 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1693439 00:08:52.014 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 1693439 ']' 00:08:52.014 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 1693439 00:08:52.014 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:08:52.014 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:52.014 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1693439 00:08:52.014 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:52.014 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:52.014 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1693439' 00:08:52.014 killing process with pid 1693439 00:08:52.014 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 1693439 00:08:52.014 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 1693439 00:08:52.273 00:08:52.273 real 0m2.145s 00:08:52.273 user 0m2.377s 00:08:52.273 sys 0m0.646s 00:08:52.273 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.273 13:19:33 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:08:52.273 ************************************ 00:08:52.273 END TEST locking_app_on_locked_coremask 00:08:52.273 ************************************ 00:08:52.273 13:19:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:08:52.273 13:19:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:52.273 13:19:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.273 13:19:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:52.273 ************************************ 00:08:52.273 START TEST locking_overlapped_coremask 00:08:52.273 ************************************ 00:08:52.273 13:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:08:52.273 13:19:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1693686 00:08:52.273 13:19:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:08:52.273 13:19:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1693686 /var/tmp/spdk.sock 00:08:52.273 13:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1693686 ']' 00:08:52.273 13:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.273 13:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.273 13:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:52.273 13:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.273 13:19:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:52.532 [2024-10-07 13:19:34.024842] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:52.532 [2024-10-07 13:19:34.024925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693686 ] 00:08:52.532 [2024-10-07 13:19:34.093848] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:52.532 [2024-10-07 13:19:34.206146] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.532 [2024-10-07 13:19:34.206210] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.532 [2024-10-07 13:19:34.206214] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1693782 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1693782 /var/tmp/spdk2.sock 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg 
waitforlisten 1693782 /var/tmp/spdk2.sock 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1693782 /var/tmp/spdk2.sock 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 1693782 ']' 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:52.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:52.790 13:19:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:53.049 [2024-10-07 13:19:34.546293] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:08:53.049 [2024-10-07 13:19:34.546373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693782 ] 00:08:53.049 [2024-10-07 13:19:34.628343] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1693686 has claimed it. 00:08:53.049 [2024-10-07 13:19:34.628415] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:08:53.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (1693782) - No such process 00:08:53.618 ERROR: process (pid: 1693782) is no longer running 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1693686 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 1693686 ']' 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 1693686 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1693686 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1693686' 00:08:53.618 killing process with pid 1693686 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 1693686 00:08:53.618 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 1693686 00:08:54.187 00:08:54.187 real 0m1.798s 00:08:54.187 user 0m4.795s 00:08:54.187 sys 0m0.480s 00:08:54.187 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:54.187 13:19:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:54.187 
************************************ 00:08:54.187 END TEST locking_overlapped_coremask 00:08:54.187 ************************************ 00:08:54.187 13:19:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:08:54.187 13:19:35 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:54.187 13:19:35 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:54.187 13:19:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:54.187 ************************************ 00:08:54.187 START TEST locking_overlapped_coremask_via_rpc 00:08:54.187 ************************************ 00:08:54.187 13:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:08:54.187 13:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1693940 00:08:54.187 13:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:08:54.187 13:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1693940 /var/tmp/spdk.sock 00:08:54.187 13:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1693940 ']' 00:08:54.187 13:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.187 13:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.187 13:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:54.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.187 13:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.187 13:19:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.187 [2024-10-07 13:19:35.873246] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:54.187 [2024-10-07 13:19:35.873343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1693940 ] 00:08:54.447 [2024-10-07 13:19:35.930341] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:08:54.447 [2024-10-07 13:19:35.930387] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.447 [2024-10-07 13:19:36.042799] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:08:54.447 [2024-10-07 13:19:36.042858] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:54.447 [2024-10-07 13:19:36.042863] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.705 13:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:54.705 13:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:54.705 13:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1694067 00:08:54.705 13:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1694067 /var/tmp/spdk2.sock 00:08:54.705 13:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:08:54.705 13:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1694067 ']' 00:08:54.705 13:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:54.705 13:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:54.705 13:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:54.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:54.705 13:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:54.705 13:19:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.705 [2024-10-07 13:19:36.359198] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:54.705 [2024-10-07 13:19:36.359281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1694067 ] 00:08:54.964 [2024-10-07 13:19:36.440241] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:08:54.964 [2024-10-07 13:19:36.440274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:54.964 [2024-10-07 13:19:36.661370] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:08:54.964 [2024-10-07 13:19:36.664725] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:08:54.964 [2024-10-07 13:19:36.664728] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.898 13:19:37 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.898 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.898 [2024-10-07 13:19:37.361769] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1693940 has claimed it. 00:08:55.898 request: 00:08:55.898 { 00:08:55.898 "method": "framework_enable_cpumask_locks", 00:08:55.898 "req_id": 1 00:08:55.899 } 00:08:55.899 Got JSON-RPC error response 00:08:55.899 response: 00:08:55.899 { 00:08:55.899 "code": -32603, 00:08:55.899 "message": "Failed to claim CPU core: 2" 00:08:55.899 } 00:08:55.899 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:55.899 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:08:55.899 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:55.899 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:55.899 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:55.899 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1693940 /var/tmp/spdk.sock 00:08:55.899 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 1693940 ']' 00:08:55.899 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.899 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:55.899 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.899 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:55.899 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.156 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.156 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:56.156 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1694067 /var/tmp/spdk2.sock 00:08:56.156 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 1694067 ']' 00:08:56.156 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:56.156 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.156 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:56.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:08:56.157 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.157 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.414 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.414 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:08:56.414 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:08:56.414 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:08:56.414 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:08:56.414 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:08:56.414 00:08:56.414 real 0m2.084s 00:08:56.414 user 0m1.131s 00:08:56.414 sys 0m0.170s 00:08:56.415 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.415 13:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:56.415 ************************************ 00:08:56.415 END TEST locking_overlapped_coremask_via_rpc 00:08:56.415 ************************************ 00:08:56.415 13:19:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:08:56.415 13:19:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1693940 ]] 00:08:56.415 13:19:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1693940 00:08:56.415 13:19:37 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1693940 ']' 00:08:56.415 13:19:37 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1693940 00:08:56.415 13:19:37 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:56.415 13:19:37 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.415 13:19:37 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1693940 00:08:56.415 13:19:37 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:56.415 13:19:37 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:56.415 13:19:37 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1693940' 00:08:56.415 killing process with pid 1693940 00:08:56.415 13:19:37 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1693940 00:08:56.415 13:19:37 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1693940 00:08:56.983 13:19:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1694067 ]] 00:08:56.983 13:19:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1694067 00:08:56.983 13:19:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1694067 ']' 00:08:56.983 13:19:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1694067 00:08:56.983 13:19:38 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:08:56.983 13:19:38 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:56.983 13:19:38 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1694067 00:08:56.983 13:19:38 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:08:56.983 13:19:38 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:08:56.983 13:19:38 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
1694067' 00:08:56.983 killing process with pid 1694067 00:08:56.983 13:19:38 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 1694067 00:08:56.983 13:19:38 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 1694067 00:08:57.242 13:19:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:57.242 13:19:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:08:57.242 13:19:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1693940 ]] 00:08:57.242 13:19:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1693940 00:08:57.242 13:19:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1693940 ']' 00:08:57.242 13:19:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1693940 00:08:57.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1693940) - No such process 00:08:57.242 13:19:38 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1693940 is not found' 00:08:57.242 Process with pid 1693940 is not found 00:08:57.242 13:19:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1694067 ]] 00:08:57.242 13:19:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1694067 00:08:57.242 13:19:38 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 1694067 ']' 00:08:57.242 13:19:38 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 1694067 00:08:57.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1694067) - No such process 00:08:57.242 13:19:38 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 1694067 is not found' 00:08:57.242 Process with pid 1694067 is not found 00:08:57.242 13:19:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:08:57.242 00:08:57.242 real 0m16.682s 00:08:57.242 user 0m29.585s 00:08:57.242 sys 0m5.278s 00:08:57.242 13:19:38 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.242 
13:19:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:57.242 ************************************ 00:08:57.242 END TEST cpu_locks 00:08:57.242 ************************************ 00:08:57.501 00:08:57.501 real 0m43.778s 00:08:57.501 user 1m23.592s 00:08:57.501 sys 0m9.395s 00:08:57.501 13:19:38 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:57.501 13:19:38 event -- common/autotest_common.sh@10 -- # set +x 00:08:57.501 ************************************ 00:08:57.501 END TEST event 00:08:57.501 ************************************ 00:08:57.501 13:19:38 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:57.501 13:19:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:57.501 13:19:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.501 13:19:38 -- common/autotest_common.sh@10 -- # set +x 00:08:57.501 ************************************ 00:08:57.501 START TEST thread 00:08:57.501 ************************************ 00:08:57.501 13:19:39 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:08:57.501 * Looking for test storage... 
00:08:57.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:08:57.501 13:19:39 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:57.501 13:19:39 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:08:57.501 13:19:39 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:57.501 13:19:39 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:57.501 13:19:39 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.501 13:19:39 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.501 13:19:39 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.501 13:19:39 thread -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.501 13:19:39 thread -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.501 13:19:39 thread -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.501 13:19:39 thread -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.501 13:19:39 thread -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.501 13:19:39 thread -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.501 13:19:39 thread -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.501 13:19:39 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.501 13:19:39 thread -- scripts/common.sh@344 -- # case "$op" in 00:08:57.501 13:19:39 thread -- scripts/common.sh@345 -- # : 1 00:08:57.501 13:19:39 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.501 13:19:39 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:57.501 13:19:39 thread -- scripts/common.sh@365 -- # decimal 1 00:08:57.501 13:19:39 thread -- scripts/common.sh@353 -- # local d=1 00:08:57.501 13:19:39 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.501 13:19:39 thread -- scripts/common.sh@355 -- # echo 1 00:08:57.501 13:19:39 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.501 13:19:39 thread -- scripts/common.sh@366 -- # decimal 2 00:08:57.501 13:19:39 thread -- scripts/common.sh@353 -- # local d=2 00:08:57.501 13:19:39 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.501 13:19:39 thread -- scripts/common.sh@355 -- # echo 2 00:08:57.501 13:19:39 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.501 13:19:39 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.501 13:19:39 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.501 13:19:39 thread -- scripts/common.sh@368 -- # return 0 00:08:57.501 13:19:39 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.501 13:19:39 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:57.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.501 --rc genhtml_branch_coverage=1 00:08:57.501 --rc genhtml_function_coverage=1 00:08:57.501 --rc genhtml_legend=1 00:08:57.501 --rc geninfo_all_blocks=1 00:08:57.501 --rc geninfo_unexecuted_blocks=1 00:08:57.501 00:08:57.501 ' 00:08:57.501 13:19:39 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:57.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.501 --rc genhtml_branch_coverage=1 00:08:57.501 --rc genhtml_function_coverage=1 00:08:57.501 --rc genhtml_legend=1 00:08:57.501 --rc geninfo_all_blocks=1 00:08:57.501 --rc geninfo_unexecuted_blocks=1 00:08:57.501 00:08:57.501 ' 00:08:57.501 13:19:39 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:57.501 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.501 --rc genhtml_branch_coverage=1 00:08:57.501 --rc genhtml_function_coverage=1 00:08:57.501 --rc genhtml_legend=1 00:08:57.501 --rc geninfo_all_blocks=1 00:08:57.501 --rc geninfo_unexecuted_blocks=1 00:08:57.501 00:08:57.501 ' 00:08:57.501 13:19:39 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:57.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.501 --rc genhtml_branch_coverage=1 00:08:57.501 --rc genhtml_function_coverage=1 00:08:57.501 --rc genhtml_legend=1 00:08:57.501 --rc geninfo_all_blocks=1 00:08:57.501 --rc geninfo_unexecuted_blocks=1 00:08:57.501 00:08:57.501 ' 00:08:57.501 13:19:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:57.501 13:19:39 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:57.501 13:19:39 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:57.501 13:19:39 thread -- common/autotest_common.sh@10 -- # set +x 00:08:57.501 ************************************ 00:08:57.501 START TEST thread_poller_perf 00:08:57.501 ************************************ 00:08:57.501 13:19:39 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:08:57.501 [2024-10-07 13:19:39.200742] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:08:57.501 [2024-10-07 13:19:39.200801] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1694433 ] 00:08:57.760 [2024-10-07 13:19:39.256392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.760 [2024-10-07 13:19:39.364150] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.760 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:59.141 [2024-10-07T11:19:40.853Z] ====================================== 00:08:59.141 [2024-10-07T11:19:40.853Z] busy:2710435929 (cyc) 00:08:59.141 [2024-10-07T11:19:40.853Z] total_run_count: 355000 00:08:59.141 [2024-10-07T11:19:40.853Z] tsc_hz: 2700000000 (cyc) 00:08:59.141 [2024-10-07T11:19:40.853Z] ====================================== 00:08:59.141 [2024-10-07T11:19:40.853Z] poller_cost: 7635 (cyc), 2827 (nsec) 00:08:59.141 00:08:59.141 real 0m1.298s 00:08:59.141 user 0m1.218s 00:08:59.141 sys 0m0.074s 00:08:59.141 13:19:40 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:59.141 13:19:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:59.141 ************************************ 00:08:59.141 END TEST thread_poller_perf 00:08:59.141 ************************************ 00:08:59.141 13:19:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:59.141 13:19:40 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:08:59.141 13:19:40 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:59.141 13:19:40 thread -- common/autotest_common.sh@10 -- # set +x 00:08:59.141 ************************************ 00:08:59.141 START TEST thread_poller_perf 00:08:59.141 
************************************ 00:08:59.141 13:19:40 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:59.141 [2024-10-07 13:19:40.549454] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:08:59.141 [2024-10-07 13:19:40.549519] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1694586 ] 00:08:59.141 [2024-10-07 13:19:40.606790] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.141 [2024-10-07 13:19:40.715139] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.141 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:09:00.520 [2024-10-07T11:19:42.232Z] ====================================== 00:09:00.520 [2024-10-07T11:19:42.232Z] busy:2702213556 (cyc) 00:09:00.520 [2024-10-07T11:19:42.232Z] total_run_count: 4882000 00:09:00.520 [2024-10-07T11:19:42.232Z] tsc_hz: 2700000000 (cyc) 00:09:00.520 [2024-10-07T11:19:42.232Z] ====================================== 00:09:00.520 [2024-10-07T11:19:42.232Z] poller_cost: 553 (cyc), 204 (nsec) 00:09:00.520 00:09:00.520 real 0m1.289s 00:09:00.520 user 0m1.210s 00:09:00.520 sys 0m0.073s 00:09:00.520 13:19:41 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.520 13:19:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:00.520 ************************************ 00:09:00.520 END TEST thread_poller_perf 00:09:00.520 ************************************ 00:09:00.520 13:19:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:00.520 00:09:00.520 real 0m2.831s 00:09:00.520 user 0m2.571s 00:09:00.521 sys 0m0.261s 00:09:00.521 13:19:41 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:09:00.521 13:19:41 thread -- common/autotest_common.sh@10 -- # set +x 00:09:00.521 ************************************ 00:09:00.521 END TEST thread 00:09:00.521 ************************************ 00:09:00.521 13:19:41 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:00.521 13:19:41 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:00.521 13:19:41 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:00.521 13:19:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:00.521 13:19:41 -- common/autotest_common.sh@10 -- # set +x 00:09:00.521 ************************************ 00:09:00.521 START TEST app_cmdline 00:09:00.521 ************************************ 00:09:00.521 13:19:41 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:00.521 * Looking for test storage... 00:09:00.521 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:00.521 13:19:41 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:00.521 13:19:41 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:09:00.521 13:19:41 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:00.521 13:19:42 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.521 13:19:42 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:00.521 13:19:42 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.521 13:19:42 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:00.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.521 --rc genhtml_branch_coverage=1 
00:09:00.521 --rc genhtml_function_coverage=1 00:09:00.521 --rc genhtml_legend=1 00:09:00.521 --rc geninfo_all_blocks=1 00:09:00.521 --rc geninfo_unexecuted_blocks=1 00:09:00.521 00:09:00.521 ' 00:09:00.521 13:19:42 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:00.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.521 --rc genhtml_branch_coverage=1 00:09:00.521 --rc genhtml_function_coverage=1 00:09:00.521 --rc genhtml_legend=1 00:09:00.521 --rc geninfo_all_blocks=1 00:09:00.521 --rc geninfo_unexecuted_blocks=1 00:09:00.521 00:09:00.521 ' 00:09:00.521 13:19:42 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:00.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.521 --rc genhtml_branch_coverage=1 00:09:00.521 --rc genhtml_function_coverage=1 00:09:00.521 --rc genhtml_legend=1 00:09:00.521 --rc geninfo_all_blocks=1 00:09:00.521 --rc geninfo_unexecuted_blocks=1 00:09:00.521 00:09:00.521 ' 00:09:00.521 13:19:42 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:00.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.521 --rc genhtml_branch_coverage=1 00:09:00.521 --rc genhtml_function_coverage=1 00:09:00.521 --rc genhtml_legend=1 00:09:00.521 --rc geninfo_all_blocks=1 00:09:00.521 --rc geninfo_unexecuted_blocks=1 00:09:00.521 00:09:00.521 ' 00:09:00.521 13:19:42 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:00.521 13:19:42 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1694894 00:09:00.521 13:19:42 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:00.521 13:19:42 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1694894 00:09:00.521 13:19:42 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 1694894 ']' 00:09:00.521 13:19:42 app_cmdline -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:00.521 13:19:42 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.521 13:19:42 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.521 13:19:42 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.521 13:19:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:00.521 [2024-10-07 13:19:42.066959] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:09:00.521 [2024-10-07 13:19:42.067076] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1694894 ] 00:09:00.521 [2024-10-07 13:19:42.122084] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.521 [2024-10-07 13:19:42.231727] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.781 13:19:42 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.781 13:19:42 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:09:00.781 13:19:42 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:01.040 { 00:09:01.040 "version": "SPDK v25.01-pre git sha1 d16db39ee", 00:09:01.040 "fields": { 00:09:01.040 "major": 25, 00:09:01.040 "minor": 1, 00:09:01.040 "patch": 0, 00:09:01.040 "suffix": "-pre", 00:09:01.040 "commit": "d16db39ee" 00:09:01.040 } 00:09:01.040 } 00:09:01.298 13:19:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:01.298 13:19:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:01.298 13:19:42 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:09:01.298 13:19:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:01.298 13:19:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:01.298 13:19:42 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:01.298 13:19:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:01.298 13:19:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:01.298 13:19:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:01.298 13:19:42 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:01.298 13:19:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:01.298 13:19:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:01.298 13:19:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:01.298 13:19:42 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:09:01.298 13:19:42 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:01.298 13:19:42 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.298 13:19:42 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.298 13:19:42 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.298 13:19:42 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:01.298 13:19:42 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.298 13:19:42 app_cmdline -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:09:01.298 13:19:42 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:01.298 13:19:42 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:01.298 13:19:42 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:01.557 request: 00:09:01.557 { 00:09:01.557 "method": "env_dpdk_get_mem_stats", 00:09:01.557 "req_id": 1 00:09:01.557 } 00:09:01.557 Got JSON-RPC error response 00:09:01.557 response: 00:09:01.557 { 00:09:01.557 "code": -32601, 00:09:01.557 "message": "Method not found" 00:09:01.557 } 00:09:01.557 13:19:43 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:09:01.557 13:19:43 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:01.557 13:19:43 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:01.557 13:19:43 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:01.557 13:19:43 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1694894 00:09:01.557 13:19:43 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 1694894 ']' 00:09:01.557 13:19:43 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 1694894 00:09:01.557 13:19:43 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:09:01.557 13:19:43 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:01.557 13:19:43 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1694894 00:09:01.557 13:19:43 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:01.557 13:19:43 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:01.557 13:19:43 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1694894' 00:09:01.557 killing process with pid 1694894 00:09:01.557 
13:19:43 app_cmdline -- common/autotest_common.sh@969 -- # kill 1694894 00:09:01.557 13:19:43 app_cmdline -- common/autotest_common.sh@974 -- # wait 1694894 00:09:02.126 00:09:02.126 real 0m1.689s 00:09:02.126 user 0m2.086s 00:09:02.126 sys 0m0.475s 00:09:02.126 13:19:43 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.126 13:19:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:02.126 ************************************ 00:09:02.126 END TEST app_cmdline 00:09:02.126 ************************************ 00:09:02.126 13:19:43 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:02.126 13:19:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:02.126 13:19:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.126 13:19:43 -- common/autotest_common.sh@10 -- # set +x 00:09:02.126 ************************************ 00:09:02.126 START TEST version 00:09:02.126 ************************************ 00:09:02.126 13:19:43 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:02.126 * Looking for test storage... 
00:09:02.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:02.126 13:19:43 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:02.126 13:19:43 version -- common/autotest_common.sh@1681 -- # lcov --version 00:09:02.126 13:19:43 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:02.126 13:19:43 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:02.126 13:19:43 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.126 13:19:43 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.126 13:19:43 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.126 13:19:43 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.126 13:19:43 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.126 13:19:43 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.126 13:19:43 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.126 13:19:43 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.126 13:19:43 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.126 13:19:43 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.126 13:19:43 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.126 13:19:43 version -- scripts/common.sh@344 -- # case "$op" in 00:09:02.126 13:19:43 version -- scripts/common.sh@345 -- # : 1 00:09:02.126 13:19:43 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.126 13:19:43 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.126 13:19:43 version -- scripts/common.sh@365 -- # decimal 1 00:09:02.126 13:19:43 version -- scripts/common.sh@353 -- # local d=1 00:09:02.126 13:19:43 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.126 13:19:43 version -- scripts/common.sh@355 -- # echo 1 00:09:02.126 13:19:43 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.126 13:19:43 version -- scripts/common.sh@366 -- # decimal 2 00:09:02.126 13:19:43 version -- scripts/common.sh@353 -- # local d=2 00:09:02.126 13:19:43 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.126 13:19:43 version -- scripts/common.sh@355 -- # echo 2 00:09:02.126 13:19:43 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.126 13:19:43 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.126 13:19:43 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.126 13:19:43 version -- scripts/common.sh@368 -- # return 0 00:09:02.126 13:19:43 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.126 13:19:43 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:02.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.126 --rc genhtml_branch_coverage=1 00:09:02.126 --rc genhtml_function_coverage=1 00:09:02.126 --rc genhtml_legend=1 00:09:02.126 --rc geninfo_all_blocks=1 00:09:02.126 --rc geninfo_unexecuted_blocks=1 00:09:02.126 00:09:02.126 ' 00:09:02.126 13:19:43 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:02.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.126 --rc genhtml_branch_coverage=1 00:09:02.126 --rc genhtml_function_coverage=1 00:09:02.126 --rc genhtml_legend=1 00:09:02.126 --rc geninfo_all_blocks=1 00:09:02.126 --rc geninfo_unexecuted_blocks=1 00:09:02.126 00:09:02.126 ' 00:09:02.126 13:19:43 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:02.126 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.126 --rc genhtml_branch_coverage=1 00:09:02.126 --rc genhtml_function_coverage=1 00:09:02.127 --rc genhtml_legend=1 00:09:02.127 --rc geninfo_all_blocks=1 00:09:02.127 --rc geninfo_unexecuted_blocks=1 00:09:02.127 00:09:02.127 ' 00:09:02.127 13:19:43 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:02.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.127 --rc genhtml_branch_coverage=1 00:09:02.127 --rc genhtml_function_coverage=1 00:09:02.127 --rc genhtml_legend=1 00:09:02.127 --rc geninfo_all_blocks=1 00:09:02.127 --rc geninfo_unexecuted_blocks=1 00:09:02.127 00:09:02.127 ' 00:09:02.127 13:19:43 version -- app/version.sh@17 -- # get_header_version major 00:09:02.127 13:19:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:02.127 13:19:43 version -- app/version.sh@14 -- # cut -f2 00:09:02.127 13:19:43 version -- app/version.sh@14 -- # tr -d '"' 00:09:02.127 13:19:43 version -- app/version.sh@17 -- # major=25 00:09:02.127 13:19:43 version -- app/version.sh@18 -- # get_header_version minor 00:09:02.127 13:19:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:02.127 13:19:43 version -- app/version.sh@14 -- # cut -f2 00:09:02.127 13:19:43 version -- app/version.sh@14 -- # tr -d '"' 00:09:02.127 13:19:43 version -- app/version.sh@18 -- # minor=1 00:09:02.127 13:19:43 version -- app/version.sh@19 -- # get_header_version patch 00:09:02.127 13:19:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:02.127 13:19:43 version -- app/version.sh@14 -- # cut -f2 00:09:02.127 13:19:43 version -- app/version.sh@14 -- # tr -d '"' 00:09:02.127 
13:19:43 version -- app/version.sh@19 -- # patch=0 00:09:02.127 13:19:43 version -- app/version.sh@20 -- # get_header_version suffix 00:09:02.127 13:19:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:02.127 13:19:43 version -- app/version.sh@14 -- # tr -d '"' 00:09:02.127 13:19:43 version -- app/version.sh@14 -- # cut -f2 00:09:02.127 13:19:43 version -- app/version.sh@20 -- # suffix=-pre 00:09:02.127 13:19:43 version -- app/version.sh@22 -- # version=25.1 00:09:02.127 13:19:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:02.127 13:19:43 version -- app/version.sh@28 -- # version=25.1rc0 00:09:02.127 13:19:43 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:02.127 13:19:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:02.127 13:19:43 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:02.127 13:19:43 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:02.127 00:09:02.127 real 0m0.200s 00:09:02.127 user 0m0.125s 00:09:02.127 sys 0m0.101s 00:09:02.127 13:19:43 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:02.127 13:19:43 version -- common/autotest_common.sh@10 -- # set +x 00:09:02.127 ************************************ 00:09:02.127 END TEST version 00:09:02.127 ************************************ 00:09:02.386 13:19:43 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:02.386 13:19:43 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:02.386 13:19:43 -- spdk/autotest.sh@194 -- # uname -s 00:09:02.386 13:19:43 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:09:02.386 13:19:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:02.386 13:19:43 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:02.386 13:19:43 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:02.386 13:19:43 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:09:02.386 13:19:43 -- spdk/autotest.sh@256 -- # timing_exit lib 00:09:02.386 13:19:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:02.386 13:19:43 -- common/autotest_common.sh@10 -- # set +x 00:09:02.386 13:19:43 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:09:02.386 13:19:43 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:09:02.386 13:19:43 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:09:02.386 13:19:43 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:09:02.386 13:19:43 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:09:02.386 13:19:43 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:09:02.386 13:19:43 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:02.386 13:19:43 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:02.386 13:19:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.386 13:19:43 -- common/autotest_common.sh@10 -- # set +x 00:09:02.386 ************************************ 00:09:02.386 START TEST nvmf_tcp 00:09:02.386 ************************************ 00:09:02.386 13:19:43 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:02.386 * Looking for test storage... 
00:09:02.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:02.386 13:19:43 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:02.386 13:19:43 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:09:02.386 13:19:43 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:02.386 13:19:44 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.386 13:19:44 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:02.386 13:19:44 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.386 13:19:44 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:02.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.386 --rc genhtml_branch_coverage=1 00:09:02.386 --rc genhtml_function_coverage=1 00:09:02.386 --rc genhtml_legend=1 00:09:02.386 --rc geninfo_all_blocks=1 00:09:02.386 --rc geninfo_unexecuted_blocks=1 00:09:02.386 00:09:02.386 ' 00:09:02.386 13:19:44 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:02.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.386 --rc genhtml_branch_coverage=1 00:09:02.386 --rc genhtml_function_coverage=1 00:09:02.386 --rc genhtml_legend=1 00:09:02.386 --rc geninfo_all_blocks=1 00:09:02.386 --rc geninfo_unexecuted_blocks=1 00:09:02.386 00:09:02.386 ' 00:09:02.386 13:19:44 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:09:02.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.386 --rc genhtml_branch_coverage=1 00:09:02.386 --rc genhtml_function_coverage=1 00:09:02.386 --rc genhtml_legend=1 00:09:02.386 --rc geninfo_all_blocks=1 00:09:02.386 --rc geninfo_unexecuted_blocks=1 00:09:02.386 00:09:02.386 ' 00:09:02.386 13:19:44 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:02.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.386 --rc genhtml_branch_coverage=1 00:09:02.386 --rc genhtml_function_coverage=1 00:09:02.386 --rc genhtml_legend=1 00:09:02.386 --rc geninfo_all_blocks=1 00:09:02.386 --rc geninfo_unexecuted_blocks=1 00:09:02.386 00:09:02.386 ' 00:09:02.386 13:19:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:02.386 13:19:44 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:02.386 13:19:44 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:02.386 13:19:44 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:02.386 13:19:44 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.386 13:19:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:02.386 ************************************ 00:09:02.386 START TEST nvmf_target_core 00:09:02.386 ************************************ 00:09:02.386 13:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:02.646 * Looking for test storage... 
00:09:02.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:02.646 13:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:02.646 13:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:09:02.646 13:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:02.646 13:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:02.646 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.646 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.646 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.646 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.646 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:02.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.647 --rc genhtml_branch_coverage=1 00:09:02.647 --rc genhtml_function_coverage=1 00:09:02.647 --rc genhtml_legend=1 00:09:02.647 --rc geninfo_all_blocks=1 00:09:02.647 --rc geninfo_unexecuted_blocks=1 00:09:02.647 00:09:02.647 ' 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:02.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.647 --rc genhtml_branch_coverage=1 
00:09:02.647 --rc genhtml_function_coverage=1 00:09:02.647 --rc genhtml_legend=1 00:09:02.647 --rc geninfo_all_blocks=1 00:09:02.647 --rc geninfo_unexecuted_blocks=1 00:09:02.647 00:09:02.647 ' 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:02.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.647 --rc genhtml_branch_coverage=1 00:09:02.647 --rc genhtml_function_coverage=1 00:09:02.647 --rc genhtml_legend=1 00:09:02.647 --rc geninfo_all_blocks=1 00:09:02.647 --rc geninfo_unexecuted_blocks=1 00:09:02.647 00:09:02.647 ' 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:02.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.647 --rc genhtml_branch_coverage=1 00:09:02.647 --rc genhtml_function_coverage=1 00:09:02.647 --rc genhtml_legend=1 00:09:02.647 --rc geninfo_all_blocks=1 00:09:02.647 --rc geninfo_unexecuted_blocks=1 00:09:02.647 00:09:02.647 ' 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:02.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:02.647 ************************************ 00:09:02.647 START TEST nvmf_abort 00:09:02.647 ************************************ 00:09:02.647 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:02.647 * Looking for test storage... 
00:09:02.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.648 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:02.648 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lcov --version 00:09:02.648 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.907 
13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.907 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:02.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.907 --rc genhtml_branch_coverage=1 00:09:02.907 --rc genhtml_function_coverage=1 00:09:02.907 --rc genhtml_legend=1 00:09:02.908 --rc geninfo_all_blocks=1 00:09:02.908 --rc 
geninfo_unexecuted_blocks=1 00:09:02.908 00:09:02.908 ' 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:02.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.908 --rc genhtml_branch_coverage=1 00:09:02.908 --rc genhtml_function_coverage=1 00:09:02.908 --rc genhtml_legend=1 00:09:02.908 --rc geninfo_all_blocks=1 00:09:02.908 --rc geninfo_unexecuted_blocks=1 00:09:02.908 00:09:02.908 ' 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:02.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.908 --rc genhtml_branch_coverage=1 00:09:02.908 --rc genhtml_function_coverage=1 00:09:02.908 --rc genhtml_legend=1 00:09:02.908 --rc geninfo_all_blocks=1 00:09:02.908 --rc geninfo_unexecuted_blocks=1 00:09:02.908 00:09:02.908 ' 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:02.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.908 --rc genhtml_branch_coverage=1 00:09:02.908 --rc genhtml_function_coverage=1 00:09:02.908 --rc genhtml_legend=1 00:09:02.908 --rc geninfo_all_blocks=1 00:09:02.908 --rc geninfo_unexecuted_blocks=1 00:09:02.908 00:09:02.908 ' 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.908 13:19:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:02.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:09:02.908 13:19:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.813 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:04.814 13:19:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:09:04.814 Found 0000:09:00.0 (0x8086 - 0x1592) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:09:04.814 Found 0000:09:00.1 (0x8086 - 0x1592) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:04.814 13:19:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:04.814 Found net devices under 0000:09:00.0: cvl_0_0 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@426 -- # echo 'Found net 
devices under 0000:09:00.1: cvl_0_1' 00:09:04.814 Found net devices under 0000:09:00.1: cvl_0_1 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # is_hw=yes 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:04.814 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:05.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:05.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:09:05.073 00:09:05.073 --- 10.0.0.2 ping statistics --- 00:09:05.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.073 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:05.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:09:05.073 00:09:05.073 --- 10.0.0.1 ping statistics --- 00:09:05.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.073 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # return 0 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # nvmfpid=1696889 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # waitforlisten 1696889 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 1696889 ']' 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.073 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.073 [2024-10-07 13:19:46.630436] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:09:05.073 [2024-10-07 13:19:46.630535] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.073 [2024-10-07 13:19:46.691880] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:05.332 [2024-10-07 13:19:46.794327] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.332 [2024-10-07 13:19:46.794388] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.332 [2024-10-07 13:19:46.794410] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.332 [2024-10-07 13:19:46.794420] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.332 [2024-10-07 13:19:46.794429] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:05.332 [2024-10-07 13:19:46.795257] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.332 [2024-10-07 13:19:46.795318] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.332 [2024-10-07 13:19:46.795322] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.332 [2024-10-07 13:19:46.946277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.332 Malloc0 00:09:05.332 13:19:46 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.332 Delay0 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.332 13:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.332 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.332 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:05.332 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.332 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.332 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.332 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:05.332 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.332 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.332 [2024-10-07 13:19:47.016136] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.332 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.332 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:05.332 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.332 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:05.332 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.332 13:19:47 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:05.592 [2024-10-07 13:19:47.162785] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:08.126 Initializing NVMe Controllers 00:09:08.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:08.126 controller IO queue size 128 less than required 00:09:08.126 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:08.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:08.126 Initialization complete. Launching workers. 
00:09:08.126 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 27940 00:09:08.126 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28001, failed to submit 62 00:09:08.126 success 27944, unsuccessful 57, failed 0 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@514 -- # nvmfcleanup 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.126 rmmod nvme_tcp 00:09:08.126 rmmod nvme_fabrics 00:09:08.126 rmmod nvme_keyring 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:09:08.126 13:19:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@515 -- # '[' -n 1696889 ']' 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # killprocess 1696889 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 1696889 ']' 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 1696889 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1696889 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1696889' 00:09:08.126 killing process with pid 1696889 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 1696889 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 1696889 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@789 -- # iptables-save 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@789 -- # iptables-restore 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.126 13:19:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:10.666 00:09:10.666 real 0m7.501s 00:09:10.666 user 0m10.967s 00:09:10.666 sys 0m2.638s 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:10.666 ************************************ 00:09:10.666 END TEST nvmf_abort 00:09:10.666 ************************************ 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:10.666 ************************************ 00:09:10.666 START TEST nvmf_ns_hotplug_stress 00:09:10.666 ************************************ 00:09:10.666 13:19:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:10.666 * Looking for test storage... 00:09:10.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:09:10.666 
13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:09:10.666 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:10.667 13:19:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:10.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.667 --rc genhtml_branch_coverage=1 00:09:10.667 --rc genhtml_function_coverage=1 00:09:10.667 --rc genhtml_legend=1 00:09:10.667 --rc geninfo_all_blocks=1 00:09:10.667 --rc geninfo_unexecuted_blocks=1 00:09:10.667 00:09:10.667 ' 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:10.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.667 --rc genhtml_branch_coverage=1 00:09:10.667 --rc genhtml_function_coverage=1 00:09:10.667 --rc genhtml_legend=1 00:09:10.667 --rc geninfo_all_blocks=1 00:09:10.667 --rc geninfo_unexecuted_blocks=1 00:09:10.667 00:09:10.667 ' 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:10.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.667 --rc genhtml_branch_coverage=1 00:09:10.667 --rc genhtml_function_coverage=1 00:09:10.667 --rc genhtml_legend=1 00:09:10.667 --rc geninfo_all_blocks=1 00:09:10.667 --rc geninfo_unexecuted_blocks=1 00:09:10.667 00:09:10.667 ' 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:10.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:10.667 --rc genhtml_branch_coverage=1 00:09:10.667 --rc genhtml_function_coverage=1 00:09:10.667 --rc genhtml_legend=1 00:09:10.667 --rc geninfo_all_blocks=1 00:09:10.667 --rc geninfo_unexecuted_blocks=1 00:09:10.667 
00:09:10.667 ' 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:10.667 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:09:10.667 13:19:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:09:12.574 13:19:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:09:12.574 Found 0000:09:00.0 (0x8086 - 0x1592) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:09:12.574 Found 0000:09:00.1 (0x8086 - 0x1592) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:12.574 13:19:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:12.574 Found net devices under 0000:09:00.0: cvl_0_0 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:09:12.574 13:19:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:12.574 Found net devices under 0000:09:00.1: cvl_0_1 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.574 13:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.574 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.574 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.574 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:12.574 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.574 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.574 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.574 13:19:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:12.574 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:12.574 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.574 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:09:12.574 00:09:12.574 --- 10.0.0.2 ping statistics --- 00:09:12.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.574 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:09:12.574 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:12.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:09:12.575 00:09:12.575 --- 10.0.0.1 ping statistics --- 00:09:12.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.575 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # return 0 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # nvmfpid=1699045 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # waitforlisten 1699045 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 1699045 ']' 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:12.575 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:12.575 [2024-10-07 13:19:54.146812] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization...
00:09:12.575 [2024-10-07 13:19:54.146891] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:12.575 [2024-10-07 13:19:54.208681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:12.833 [2024-10-07 13:19:54.318061] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:12.833 [2024-10-07 13:19:54.318114] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:12.833 [2024-10-07 13:19:54.318138] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:12.833 [2024-10-07 13:19:54.318149] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:12.833 [2024-10-07 13:19:54.318158] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:12.833 [2024-10-07 13:19:54.318943] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:09:12.833 [2024-10-07 13:19:54.319018] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:09:12.833 [2024-10-07 13:19:54.319021] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:09:12.833 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:12.833 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0
00:09:12.833 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:09:12.833 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:09:12.833 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:12.833 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:12.833 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:09:12.833 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:09:13.091 [2024-10-07 13:19:54.707905] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:13.091 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:13.348 13:19:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:13.606 [2024-10-07 13:19:55.254467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:13.606 13:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:13.865 13:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:09:14.124 Malloc0
00:09:14.124 13:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:14.384 Delay0
00:09:14.384 13:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:14.951 13:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:09:14.952 NULL1
00:09:14.952 13:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:09:15.210 13:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1699423
00:09:15.210 13:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:09:15.210 13:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:15.210 13:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:15.469 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:16.035 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:09:16.035 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:09:16.035 true
00:09:16.035 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:16.035 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:16.293 13:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:16.552 13:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:09:16.552 13:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:09:16.811 true
00:09:17.070 13:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:17.070 13:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:17.351 13:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:17.648 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:09:17.648 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:09:17.648 true
00:09:17.648 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:17.648 13:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:19.027 Read completed with error (sct=0, sc=11)
00:09:19.027 13:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:19.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:19.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:19.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:19.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:19.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:19.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:19.027 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:19.027 13:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:09:19.027 13:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:09:19.286 true
00:09:19.286 13:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:19.286 13:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:20.225 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:20.484 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:09:20.484 13:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:09:20.742 true
00:09:20.742 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:20.742 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:20.999 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:21.257 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:09:21.257 13:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:09:21.515 true
00:09:21.515 13:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:21.515 13:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:21.773 13:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:22.031 13:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:09:22.031 13:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:09:22.289 true
00:09:22.289 13:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:22.289 13:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:23.222 13:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:23.480 13:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:09:23.480 13:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:09:23.738 true
00:09:23.738 13:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:23.738 13:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:23.996 13:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:24.254 13:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:09:24.254 13:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:09:24.555 true
00:09:24.555 13:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:24.555 13:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:24.838 13:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:25.096 13:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:09:25.097 13:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:09:25.354 true
00:09:25.354 13:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:25.354 13:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:26.289 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:26.289 13:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:26.546 13:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:09:26.546 13:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:09:26.804 true
00:09:26.804 13:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:26.804 13:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:27.062 13:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:27.321 13:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:09:27.321 13:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:09:27.579 true
00:09:27.579 13:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:27.579 13:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:28.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:28.511 13:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:28.511 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:28.769 13:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:09:28.769 13:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:09:29.027 true
00:09:29.027 13:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:29.027 13:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:29.285 13:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:29.543 13:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:09:29.543 13:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:09:29.802 true
00:09:29.802 13:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:29.802 13:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:30.060 13:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:30.628 13:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:09:30.628 13:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:09:30.628 true
00:09:30.628 13:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:30.628 13:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:32.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:32.004 13:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:32.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:32.004 13:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:09:32.004 13:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:09:32.262 true
00:09:32.262 13:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:32.262 13:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:32.520 13:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:32.777 13:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:09:32.777 13:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:09:33.036 true
00:09:33.036 13:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:33.036 13:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:33.293 13:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:33.552 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:09:33.552 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:09:33.810 true
00:09:33.810 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:33.810 13:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:34.748 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:34.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:35.006 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:09:35.006 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:09:35.264 true
00:09:35.264 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:35.264 13:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:35.522 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:35.780 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:09:35.780 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:09:36.038 true
00:09:36.038 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:36.038 13:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:36.298 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:36.868 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:09:36.868 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:09:36.868 true
00:09:36.868 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:36.868 13:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:37.806 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:37.806 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:38.064 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:09:38.064 13:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:09:38.322 true
00:09:38.322 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:38.322 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:38.890 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:38.890 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:09:38.890 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:09:39.148 true
00:09:39.148 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:39.148 13:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:39.407 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:39.974 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:09:39.974 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:09:39.974 true
00:09:39.974 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:39.974 13:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:40.911 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:09:40.911 13:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:41.169 13:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:09:41.169 13:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:09:41.427 true
00:09:41.427 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:41.427 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:41.685 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:42.254 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:09:42.254 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:09:42.254 true
00:09:42.254 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:42.254 13:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:42.512 13:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:42.770 13:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:09:42.770 13:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:09:43.030 true
00:09:43.288 13:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:43.288 13:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:44.228 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:44.487 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:09:44.487 13:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:09:44.744 true
00:09:44.744 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:44.744 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:45.002 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:45.260 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:09:45.260 13:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:09:45.517 true
00:09:45.517 13:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:45.517 13:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:45.776 13:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:09:45.776 Initializing NVMe Controllers
00:09:45.776 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:45.776 Controller IO queue size 128, less than required.
00:09:45.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:45.776 Controller IO queue size 128, less than required.
00:09:45.776 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:45.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:45.776 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:09:45.776 Initialization complete. Launching workers.
00:09:45.776 ========================================================
00:09:45.776                                                                                                               Latency(us)
00:09:45.776 Device Information                                                       : IOPS      MiB/s    Average        min        max
00:09:45.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  578.31      0.28   82927.31    2963.74 1018019.01
00:09:45.776 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7872.17      3.84   16213.13    3521.32  536625.26
00:09:45.776 ========================================================
00:09:45.776 Total                                                                    : 8450.48      4.13   20778.72    2963.74 1018019.01
00:09:45.776
00:09:46.034 13:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:09:46.034 13:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:09:46.292 true
00:09:46.292 13:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1699423
00:09:46.292 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1699423) - No such process
00:09:46.292 13:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1699423
00:09:46.292 13:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:46.550 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:46.807 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:09:46.807 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:09:46.807 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:09:46.807 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:46.807 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:09:47.064 null0
00:09:47.064 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:47.064 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:47.064 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:09:47.321 null1
00:09:47.321 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:47.321 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:47.321 13:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:09:47.579 null2
00:09:47.579 13:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:47.579 13:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:47.579 13:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:09:47.838 null3
00:09:47.838 13:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:47.838 13:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:47.838 13:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:09:48.095 null4
00:09:48.095 13:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:48.095 13:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:48.095 13:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:09:48.354 null5
00:09:48.354 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:48.354 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:48.354 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:09:48.612 null6
00:09:48.612 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:09:48.612 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:09:48.612 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:09:48.871 null7 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:48.871 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1703426 1703427 1703429 1703431 1703433 1703435 1703437 1703439 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.130 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:49.389 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.389 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:49.389 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:49.389 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:49.389 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:49.389 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:49.389 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:49.389 13:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:49.647 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:49.905 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:49.905 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.905 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:49.905 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:49.905 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:09:49.905 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:49.906 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:49.906 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.164 13:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:50.422 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:50.422 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:50.422 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:50.422 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:50.422 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.422 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:50.422 13:20:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:50.422 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:50.682 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.682 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.682 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:50.682 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.682 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.682 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:50.682 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.682 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.682 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:50.682 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:09:50.682 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.682 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:50.683 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.683 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.683 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:50.965 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.965 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.965 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:50.965 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.965 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.965 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:50.965 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:50.965 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:50.965 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:50.965 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:51.223 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:51.223 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:51.223 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.223 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:51.223 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:51.223 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:51.223 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.481 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.482 13:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:51.740 13:20:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:51.740 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:51.740 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.740 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:51.740 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:51.740 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:51.740 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:51.740 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.998 13:20:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:51.998 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:52.256 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:52.256 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:52.256 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:52.257 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:52.257 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:52.257 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:52.257 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:52.257 13:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:52.515 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:52.773 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:52.773 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:52.773 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:52.773 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:52.773 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:52.773 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:53.030 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:53.030 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.289 13:20:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.289 13:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:53.548 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:53.548 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:53.548 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:53.548 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:53.548 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:53.548 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:53.548 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:53.548 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:53.807 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:09:54.066 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:09:54.066 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:54.066 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:09:54.066 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:09:54.066 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:09:54.066 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:09:54.066 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:09:54.066 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:54.326 13:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:09:54.585 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:09:54.585 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:09:54.585 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:09:54.585 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:09:54.585 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:09:54.585 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:54.585 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:09:54.585 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:09:54.845 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:54.846 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@514 -- # nvmfcleanup
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:55.106 rmmod nvme_tcp
00:09:55.106 rmmod nvme_fabrics
00:09:55.106 rmmod nvme_keyring
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@515 -- # '[' -n 1699045 ']'
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # killprocess 1699045
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 1699045 ']'
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 1699045
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1699045
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1699045'
killing process with pid 1699045
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 1699045
00:09:55.106 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 1699045
00:09:55.365 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']'
00:09:55.365 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]]
00:09:55.365 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini
00:09:55.365 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:09:55.365 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-save
00:09:55.365 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:09:55.365 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@789 -- # iptables-restore
00:09:55.365 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:55.365 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:55.365 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:55.365 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:55.366 13:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:57.913
00:09:57.913 real 0m47.208s
00:09:57.913 user 3m40.939s
00:09:57.913 sys 0m15.485s
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:09:57.913 ************************************
00:09:57.913 END TEST nvmf_ns_hotplug_stress
00:09:57.913 ************************************
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:57.913 ************************************
00:09:57.913 START TEST nvmf_delete_subsystem
00:09:57.913 ************************************
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:09:57.913 * Looking for test storage...
00:09:57.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lcov --version
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:09:57.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.913 --rc genhtml_branch_coverage=1
00:09:57.913 --rc genhtml_function_coverage=1
00:09:57.913 --rc genhtml_legend=1
00:09:57.913 --rc geninfo_all_blocks=1
00:09:57.913 --rc geninfo_unexecuted_blocks=1
00:09:57.913
00:09:57.913 '
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:09:57.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.913 --rc genhtml_branch_coverage=1
00:09:57.913 --rc genhtml_function_coverage=1
00:09:57.913 --rc genhtml_legend=1
00:09:57.913 --rc geninfo_all_blocks=1
00:09:57.913 --rc geninfo_unexecuted_blocks=1
00:09:57.913
00:09:57.913 '
00:09:57.913 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:09:57.913 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.914 --rc genhtml_branch_coverage=1
00:09:57.914 --rc genhtml_function_coverage=1
00:09:57.914 --rc genhtml_legend=1
00:09:57.914 --rc geninfo_all_blocks=1
00:09:57.914 --rc geninfo_unexecuted_blocks=1
00:09:57.914
00:09:57.914 '
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:09:57.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:57.914 --rc genhtml_branch_coverage=1
00:09:57.914 --rc genhtml_function_coverage=1
00:09:57.914 --rc genhtml_legend=1
00:09:57.914 --rc geninfo_all_blocks=1
00:09:57.914 --rc geninfo_unexecuted_blocks=1
00:09:57.914
00:09:57.914 '
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # prepare_net_devs
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@436 -- # local -g is_hw=no
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # remove_spdk_ns
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:09:57.914 13:20:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:09:59.822 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:59.822 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:09:59.822 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:59.822 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:59.822 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:59.822 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:59.822 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:59.822 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:09:59.822 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:59.822 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)'
Found 0000:09:00.0 (0x8086 - 0x1592)
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)'
Found 0000:09:00.1 (0x8086 - 0x1592)
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
Found net devices under 0000:09:00.0: cvl_0_0
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ up == up ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
Found net devices under 0000:09:00.1: cvl_0_1
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # is_hw=yes
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:59.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:59.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms
00:09:59.823
00:09:59.823 --- 10.0.0.2 ping statistics ---
00:09:59.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:59.823 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:59.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:59.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms
00:09:59.823
00:09:59.823 --- 10.0.0.1 ping statistics ---
00:09:59.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:59.823 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # return 0
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:09:59.823 13:20:41
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.823 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.824 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # nvmfpid=1706090 00:09:59.824 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # waitforlisten 1706090 00:09:59.824 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:09:59.824 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 1706090 ']' 00:09:59.824 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.824 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.824 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.824 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.824 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.824 [2024-10-07 13:20:41.529543] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
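The namespace plumbing common.sh performed earlier in this log (nvmf_tcp_init) can be sketched as a dry-run script. Interface names, addresses, and the namespace name are copied from the log; `run` only echoes each command, so the sketch is side-effect free and needs no root.

```shell
# Echo-only stand-in for the privileged commands; swap in "sudo" to apply for real.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                                  # target-side namespace, as in the log
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                 # move the target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # connectivity check, as in the log
```

The split puts target and initiator on the same host while forcing traffic through real NICs rather than loopback.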
00:09:59.824 [2024-10-07 13:20:41.529627] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.083 [2024-10-07 13:20:41.591014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:00.083 [2024-10-07 13:20:41.697882] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.083 [2024-10-07 13:20:41.697943] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.083 [2024-10-07 13:20:41.697956] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.083 [2024-10-07 13:20:41.697967] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.083 [2024-10-07 13:20:41.697978] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:00.083 [2024-10-07 13:20:41.700690] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.083 [2024-10-07 13:20:41.700702] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.342 [2024-10-07 13:20:41.842070] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.342 [2024-10-07 13:20:41.858329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.342 NULL1 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.342 Delay0 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.342 13:20:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1706224 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:00.342 13:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:00.342 [2024-10-07 13:20:41.933123] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
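The rpc_cmd sequence above boils down to six scripts/rpc.py calls: create the TCP transport, create the subsystem, add a listener, back it with a null bdev wrapped in a delay bdev, and attach that as a namespace. A dry-run sketch (method names and arguments copied from the log; `rpc` only echoes):

```shell
# Echo-only stand-in; replace with "scripts/rpc.py" (run inside the target netns) to apply.
rpc() { echo "+ rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                 # 1000 MB null bdev, 512-byte blocks
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The delay bdev (1 s added latency on every path) is what keeps I/O in flight long enough for the subsequent nvmf_delete_subsystem to race against active commands.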
00:10:02.249 13:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:02.250 13:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.250 13:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:10:02.510 Write completed with error (sct=0, sc=8)
00:10:02.510 Read completed with error (sct=0, sc=8)
00:10:02.510 starting I/O failed: -6
[... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines elided ...]
00:10:02.510 [2024-10-07 13:20:44.149707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5fa800d640 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" lines elided ...]
00:10:03.450 [2024-10-07 13:20:45.109869] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfea70 is same with the state(6) to be set
[... repeated completion-with-error lines elided ...]
00:10:03.450 [2024-10-07 13:20:45.149297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f5fa800d310 is same with the state(6) to be set
[... repeated completion-with-error lines elided ...]
00:10:03.450 [2024-10-07 13:20:45.152160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd930 is same with the state(6) to be set
[... repeated completion-with-error lines elided ...]
00:10:03.451 [2024-10-07 13:20:45.152988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd390 is same with the state(6) to be set
[... repeated completion-with-error lines elided ...]
00:10:03.451 [2024-10-07 13:20:45.153207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfd570 is same with the state(6) to be set
00:10:03.451 Initializing NVMe Controllers
00:10:03.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:03.451 Controller IO queue size 128, less than required.
00:10:03.451 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
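The Total row in the perf summary that follows is consistent with summing the per-core IOPS and IOPS-weighting the per-core average latencies. A quick check using the values printed in the log (the last digits differ slightly from the reported 931117.65 us because the log rounds IOPS to two decimals while perf averages unrounded internal counters):

```shell
# Sanity-check the perf summary's Total row from its two per-core rows.
# IOPS and average-latency (us) values are copied from the log.
weighted_avg() {
    awk -v i1="$1" -v a1="$2" -v i2="$3" -v a2="$4" \
        'BEGIN { printf "%.2f\n", (i1 * a1 + i2 * a2) / (i1 + i2) }'
}

weighted_avg 181.11 961604.69 150.84 894513.15   # close to the Total row's 931117.65
```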
00:10:03.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:10:03.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:10:03.451 Initialization complete. Launching workers.
00:10:03.451 ========================================================
00:10:03.451 Latency(us)
00:10:03.451 Device Information : IOPS MiB/s Average min max
00:10:03.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 181.11 0.09 961604.69 857.70 1011644.18
00:10:03.451 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.84 0.07 894513.15 516.06 1012666.51
00:10:03.451 ========================================================
00:10:03.451 Total : 331.95 0.16 931117.65 516.06 1012666.51
00:10:03.451
00:10:03.451 [2024-10-07 13:20:45.153976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfea70 (9): Bad file descriptor
00:10:03.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:10:03.451 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.451 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:03.451 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1706224 00:10:03.451 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1706224 00:10:04.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1706224) - No such process 00:10:04.022 13:20:45
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1706224 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1706224 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1706224 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.022 [2024-10-07 13:20:45.677042] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1706620 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1706620 00:10:04.022 13:20:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:04.281 [2024-10-07 13:20:45.742246] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:04.540 13:20:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:04.540 13:20:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1706620 00:10:04.540 13:20:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:05.109 13:20:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:05.109 13:20:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1706620 00:10:05.109 13:20:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:05.684 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:05.684 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1706620 00:10:05.684 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:06.255 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:06.255 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1706620 00:10:06.255 13:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:06.514 13:20:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:06.514 13:20:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1706620 00:10:06.514 13:20:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:07.083 13:20:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:07.083 13:20:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1706620 00:10:07.083 13:20:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:07.343 Initializing NVMe Controllers 00:10:07.343 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:07.343 Controller IO queue size 128, less than required. 00:10:07.343 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:07.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:07.343 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:07.343 Initialization complete. Launching workers. 
00:10:07.343 ======================================================== 00:10:07.343 Latency(us) 00:10:07.343 Device Information : IOPS MiB/s Average min max 00:10:07.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005193.05 1000228.42 1040911.20 00:10:07.343 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004349.98 1000184.35 1041642.35 00:10:07.343 ======================================================== 00:10:07.343 Total : 256.00 0.12 1004771.52 1000184.35 1041642.35 00:10:07.343 00:10:07.602 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:07.602 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1706620 00:10:07.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1706620) - No such process 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1706620 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:10:07.603 rmmod nvme_tcp 00:10:07.603 rmmod nvme_fabrics 00:10:07.603 rmmod nvme_keyring 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@515 -- # '[' -n 1706090 ']' 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # killprocess 1706090 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 1706090 ']' 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 1706090 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1706090 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1706090' 00:10:07.603 killing process with pid 1706090 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 1706090 00:10:07.603 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 
1706090 00:10:07.861 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:07.861 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:07.861 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:07.861 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:10:07.862 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-save 00:10:07.862 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:07.862 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@789 -- # iptables-restore 00:10:07.862 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.862 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:07.862 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.862 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.862 13:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.400 00:10:10.400 real 0m12.538s 00:10:10.400 user 0m28.216s 00:10:10.400 sys 0m2.963s 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:10.400 ************************************ 00:10:10.400 END TEST 
nvmf_delete_subsystem 00:10:10.400 ************************************ 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.400 ************************************ 00:10:10.400 START TEST nvmf_host_management 00:10:10.400 ************************************ 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:10.400 * Looking for test storage... 00:10:10.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.400 13:20:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.400 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:10.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.401 --rc genhtml_branch_coverage=1 00:10:10.401 --rc genhtml_function_coverage=1 00:10:10.401 --rc genhtml_legend=1 00:10:10.401 --rc 
geninfo_all_blocks=1 00:10:10.401 --rc geninfo_unexecuted_blocks=1 00:10:10.401 00:10:10.401 ' 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:10.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.401 --rc genhtml_branch_coverage=1 00:10:10.401 --rc genhtml_function_coverage=1 00:10:10.401 --rc genhtml_legend=1 00:10:10.401 --rc geninfo_all_blocks=1 00:10:10.401 --rc geninfo_unexecuted_blocks=1 00:10:10.401 00:10:10.401 ' 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:10.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.401 --rc genhtml_branch_coverage=1 00:10:10.401 --rc genhtml_function_coverage=1 00:10:10.401 --rc genhtml_legend=1 00:10:10.401 --rc geninfo_all_blocks=1 00:10:10.401 --rc geninfo_unexecuted_blocks=1 00:10:10.401 00:10:10.401 ' 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:10.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.401 --rc genhtml_branch_coverage=1 00:10:10.401 --rc genhtml_function_coverage=1 00:10:10.401 --rc genhtml_legend=1 00:10:10.401 --rc geninfo_all_blocks=1 00:10:10.401 --rc geninfo_unexecuted_blocks=1 00:10:10.401 00:10:10.401 ' 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.401 
13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:10:10.401 13:20:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.335 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:10:12.336 Found 0000:09:00.0 (0x8086 - 0x1592) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:10:12.336 Found 0000:09:00.1 (0x8086 - 0x1592) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.336 13:20:53 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:12.336 Found net devices under 0000:09:00.0: cvl_0_0 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # 
net_devs+=("${pci_net_devs[@]}") 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:12.336 Found net devices under 0000:09:00.1: cvl_0_1 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # is_hw=yes 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:10:12.336 00:10:12.336 --- 10.0.0.2 ping statistics --- 00:10:12.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.336 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:12.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:10:12.336 00:10:12.336 --- 10.0.0.1 ping statistics --- 00:10:12.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.336 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # return 0 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:12.336 13:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:12.336 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:12.336 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:12.336 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:12.336 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:12.336 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:12.336 13:20:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:12.336 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # nvmfpid=1708889 00:10:12.336 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:12.336 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # waitforlisten 1708889 00:10:12.336 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1708889 ']' 00:10:12.336 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.336 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.336 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.337 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.337 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:12.623 [2024-10-07 13:20:54.067099] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
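The nvmf_tcp_init sequence traced above moves one port of the E810 pair (cvl_0_0) into a private network namespace so the target (10.0.0.2) and initiator (10.0.0.1) talk over a real link. A dry-run sketch of those steps, using a `run` wrapper that only prints each command so it is safe to execute without root or real NICs:

```shell
# Dry-run sketch of the nvmf_tcp_init plumbing from nvmf/common.sh above.
# "run" only echoes the command; drop the wrapper (and run as root) to
# apply the setup for real.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"              # target NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# The real script additionally tags this rule with an "-m comment" marker.
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                           # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator
```

Once both pings succeed, every target-side process (nvmf_tgt below) is launched under `ip netns exec $NS`, which is what the `NVMF_TARGET_NS_CMD` prefix in the log is for.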
00:10:12.623 [2024-10-07 13:20:54.067180] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.623 [2024-10-07 13:20:54.131240] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.623 [2024-10-07 13:20:54.241870] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.623 [2024-10-07 13:20:54.241940] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.623 [2024-10-07 13:20:54.241954] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.623 [2024-10-07 13:20:54.241965] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.623 [2024-10-07 13:20:54.241974] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:12.623 [2024-10-07 13:20:54.243710] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.623 [2024-10-07 13:20:54.243741] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.623 [2024-10-07 13:20:54.243792] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:10:12.623 [2024-10-07 13:20:54.243796] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:12.889 [2024-10-07 13:20:54.406889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:12.889 13:20:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:12.889 Malloc0 00:10:12.889 [2024-10-07 13:20:54.469761] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1709023 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1709023 /var/tmp/bdevperf.sock 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 1709023 ']' 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:12.889 { 00:10:12.889 "params": { 00:10:12.889 "name": "Nvme$subsystem", 00:10:12.889 "trtype": "$TEST_TRANSPORT", 00:10:12.889 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:12.889 "adrfam": "ipv4", 00:10:12.889 "trsvcid": "$NVMF_PORT", 00:10:12.889 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:12.889 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:12.889 "hdgst": ${hdgst:-false}, 00:10:12.889 "ddgst": ${ddgst:-false} 00:10:12.889 }, 00:10:12.889 "method": "bdev_nvme_attach_controller" 00:10:12.889 } 00:10:12.889 EOF 00:10:12.889 )") 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:12.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:10:12.889 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:12.889 "params": { 00:10:12.890 "name": "Nvme0", 00:10:12.890 "trtype": "tcp", 00:10:12.890 "traddr": "10.0.0.2", 00:10:12.890 "adrfam": "ipv4", 00:10:12.890 "trsvcid": "4420", 00:10:12.890 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:12.890 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:12.890 "hdgst": false, 00:10:12.890 "ddgst": false 00:10:12.890 }, 00:10:12.890 "method": "bdev_nvme_attach_controller" 00:10:12.890 }' 00:10:12.890 [2024-10-07 13:20:54.545329] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:10:12.890 [2024-10-07 13:20:54.545419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709023 ] 00:10:13.149 [2024-10-07 13:20:54.604775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.149 [2024-10-07 13:20:54.716716] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.408 Running I/O for 10 seconds... 
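While bdevperf runs its 10-second workload, the test polls it over the RPC socket until Nvme0n1 shows at least 100 completed reads. A simplified, self-contained sketch of that waitforio loop from target/host_management.sh, with the RPC call (`rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1` piped through `jq -r '.bdevs[0].num_read_ops'` in the real script) replaced by a hypothetical `fake_iostat` stub so the sketch runs standalone:

```shell
# Simplified sketch of the waitforio polling loop; fake_iostat is a
# stand-in for the jq-extracted num_read_ops value from bdev_get_iostat.
fake_iostat() {
  echo "$1"
}

waitforio() {
  ret=1
  i=10
  count=0
  while [ "$i" -ne 0 ]; do
    count=$((count + 67))   # simulate I/O accumulating between polls
    read_io_count=$(fake_iostat "$count")
    if [ "$read_io_count" -ge 100 ]; then
      ret=0                 # threshold reached: I/O is flowing
      break
    fi
    sleep 0.25              # same back-off as the real helper
    i=$((i - 1))
  done
  return $ret
}

waitforio && echo "read I/O threshold reached"
```

In the trace below the first poll reads 67 ops (under the threshold, so the helper sleeps 0.25s) and the second reads 515, at which point the loop breaks with ret=0 and the test proceeds to the subsystem host add/remove checks.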
00:10:13.408 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.408 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:10:13.408 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:13.408 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.408 13:20:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:10:13.408 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.669 [2024-10-07 13:20:55.353046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.669 [2024-10-07 13:20:55.353111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.669 [2024-10-07 13:20:55.353143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.669 [2024-10-07 13:20:55.353160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.669 [2024-10-07 13:20:55.353189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.669 [2024-10-07 13:20:55.353204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:10:13.669 [2024-10-07 13:20:55.353220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.669 [2024-10-07 13:20:55.353235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.669 [2024-10-07 13:20:55.353259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.669 [2024-10-07 13:20:55.353273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.669 [2024-10-07 13:20:55.353289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.669 [2024-10-07 13:20:55.353304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:13.669 [2024-10-07 13:20:55.353319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.669 [2024-10-07 13:20:55.353335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.669 [2024-10-07 13:20:55.353351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.669 [2024-10-07 13:20:55.353365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.669 [2024-10-07 13:20:55.353382] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.669 [2024-10-07 13:20:55.353396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.669 [2024-10-07 13:20:55.353423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.669 [2024-10-07 13:20:55.353438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.669 [2024-10-07 13:20:55.353454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.669 [2024-10-07 13:20:55.353468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.669 [2024-10-07 13:20:55.353484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.669 [2024-10-07 13:20:55.353498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.669 [2024-10-07 13:20:55.353514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.669 [2024-10-07 13:20:55.353528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.669 [2024-10-07 13:20:55.353545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:10:13.669 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.669 [2024-10-07 13:20:55.353559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.669 [2024-10-07 13:20:55.353577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.669 [2024-10-07 13:20:55.353591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 48 further identical READ / *NOTICE*: ABORTED - SQ DELETION (00/08) pairs elided: cid:14-61, nsid:1, lba advancing from 75520 to 81536 in len:128 steps ...]
00:10:13.671 [2024-10-07 13:20:55.355129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:13.671 [2024-10-07 13:20:55.355143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:10:13.671 [2024-10-07 13:20:55.355158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1864cd0 is same with the state(6) to be set 00:10:13.671 [2024-10-07 13:20:55.355228] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1864cd0 was disconnected and freed. reset controller. 00:10:13.671 [2024-10-07 13:20:55.355303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.671 [2024-10-07 13:20:55.355341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.671 [2024-10-07 13:20:55.355357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.671 [2024-10-07 13:20:55.355371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.671 [2024-10-07 13:20:55.355385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.671 [2024-10-07 13:20:55.355400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.671 [2024-10-07 13:20:55.355414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:13.671 [2024-10-07 13:20:55.355427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:13.671 [2024-10-07 13:20:55.355440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x164caf0 is same with the state(6) to be set 00:10:13.671 [2024-10-07 13:20:55.356594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:10:13.671 task offset: 81792 on job bdev=Nvme0n1 fails 00:10:13.671 00:10:13.671 Latency(us) 00:10:13.671 [2024-10-07T11:20:55.383Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.671 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:13.671 Job: Nvme0n1 ended in about 0.41 seconds with error 00:10:13.671 Verification LBA range: start 0x0 length 0x400 00:10:13.671 Nvme0n1 : 0.41 1404.73 87.80 156.08 0.00 39873.88 7233.23 39418.69 00:10:13.671 [2024-10-07T11:20:55.383Z] =================================================================================================================== 00:10:13.671 [2024-10-07T11:20:55.383Z] Total : 1404.73 87.80 156.08 0.00 39873.88 7233.23 39418.69 00:10:13.671 [2024-10-07 13:20:55.358548] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:13.671 [2024-10-07 13:20:55.358576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164caf0 (9): Bad file descriptor 00:10:13.671 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.671 13:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:13.929 [2024-10-07 13:20:55.450841] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
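The MiB/s column in the bdevperf Device Information tables above (and in the successful 1-second run further down) follows directly from the average IOPS at the fixed 65536-byte I/O size set by `-o 65536`: MiB/s = IOPS * 64 KiB / 1 MiB = IOPS / 16. A quick sanity check of the logged figures; `mibps` is a hypothetical helper written for this note, not part of SPDK or the test suite:

```python
# Sanity-check bdevperf's MiB/s column against its IOPS column.
# The test runs bdevperf with "-o 65536", i.e. 65536 bytes per I/O,
# so throughput in MiB/s is simply IOPS / 16.

IO_SIZE = 65536  # bytes per I/O, from the bdevperf -o argument


def mibps(iops: float, io_size: int = IO_SIZE) -> float:
    """Convert an average IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size / (1024 * 1024)


# Failed run (0.41 s runtime): table reports 1404.73 IOPS and 87.80 MiB/s.
assert abs(mibps(1404.73) - 87.80) < 0.01

# Successful run (about 1.01 s): table reports 1713.72 IOPS and 107.11 MiB/s.
assert abs(mibps(1713.72) - 107.11) < 0.01
```

The same relation explains the aborted READs earlier in the log: each command is len:128 blocks, i.e. one 64 KiB I/O at a 512-byte block size, with LBAs advancing in 128-block steps.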
00:10:14.867 13:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1709023 00:10:14.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1709023) - No such process 00:10:14.867 13:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:14.867 13:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:14.867 13:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:14.867 13:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:14.867 13:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # config=() 00:10:14.867 13:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # local subsystem config 00:10:14.867 13:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:10:14.867 13:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:10:14.867 { 00:10:14.867 "params": { 00:10:14.867 "name": "Nvme$subsystem", 00:10:14.867 "trtype": "$TEST_TRANSPORT", 00:10:14.867 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:14.867 "adrfam": "ipv4", 00:10:14.867 "trsvcid": "$NVMF_PORT", 00:10:14.867 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:14.867 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:14.867 "hdgst": ${hdgst:-false}, 00:10:14.867 "ddgst": ${ddgst:-false} 00:10:14.867 }, 00:10:14.867 "method": "bdev_nvme_attach_controller" 00:10:14.867 } 00:10:14.867 EOF 00:10:14.867 )") 00:10:14.867 
13:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # cat 00:10:14.867 13:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # jq . 00:10:14.867 13:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@583 -- # IFS=, 00:10:14.867 13:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:10:14.868 "params": { 00:10:14.868 "name": "Nvme0", 00:10:14.868 "trtype": "tcp", 00:10:14.868 "traddr": "10.0.0.2", 00:10:14.868 "adrfam": "ipv4", 00:10:14.868 "trsvcid": "4420", 00:10:14.868 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:14.868 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:14.868 "hdgst": false, 00:10:14.868 "ddgst": false 00:10:14.868 }, 00:10:14.868 "method": "bdev_nvme_attach_controller" 00:10:14.868 }' 00:10:14.868 [2024-10-07 13:20:56.413582] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:10:14.868 [2024-10-07 13:20:56.413686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1709291 ] 00:10:14.868 [2024-10-07 13:20:56.471286] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.126 [2024-10-07 13:20:56.584292] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.385 Running I/O for 1 seconds... 
00:10:16.323 1664.00 IOPS, 104.00 MiB/s 00:10:16.323 Latency(us) 00:10:16.323 [2024-10-07T11:20:58.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:16.323 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:16.323 Verification LBA range: start 0x0 length 0x400 00:10:16.323 Nvme0n1 : 1.01 1713.72 107.11 0.00 0.00 36729.93 5291.43 33010.73 00:10:16.323 [2024-10-07T11:20:58.035Z] =================================================================================================================== 00:10:16.323 [2024-10-07T11:20:58.035Z] Total : 1713.72 107.11 0.00 0.00 36729.93 5291.43 33010.73 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.583 13:20:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.583 rmmod nvme_tcp 00:10:16.583 rmmod nvme_fabrics 00:10:16.583 rmmod nvme_keyring 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@515 -- # '[' -n 1708889 ']' 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # killprocess 1708889 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 1708889 ']' 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 1708889 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1708889 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1708889' 00:10:16.583 killing process with pid 1708889 00:10:16.583 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 1708889 00:10:16.583 13:20:58 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 1708889 00:10:16.840 [2024-10-07 13:20:58.530420] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:17.101 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:17.101 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:17.101 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:17.101 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:17.101 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-save 00:10:17.101 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:17.101 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@789 -- # iptables-restore 00:10:17.101 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:17.101 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:17.101 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.101 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.101 13:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.009 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:19.009 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:19.009 00:10:19.009 real 0m8.948s 00:10:19.009 user 0m20.397s 
00:10:19.009 sys 0m2.741s 00:10:19.009 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.009 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:19.009 ************************************ 00:10:19.009 END TEST nvmf_host_management 00:10:19.009 ************************************ 00:10:19.009 13:21:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:19.009 13:21:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:19.009 13:21:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.009 13:21:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.009 ************************************ 00:10:19.009 START TEST nvmf_lvol 00:10:19.009 ************************************ 00:10:19.009 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:19.009 * Looking for test storage... 
00:10:19.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.009 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:19.009 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:10:19.009 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.270 13:21:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:19.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.270 --rc genhtml_branch_coverage=1 00:10:19.270 --rc genhtml_function_coverage=1 00:10:19.270 --rc genhtml_legend=1 00:10:19.270 --rc geninfo_all_blocks=1 00:10:19.270 --rc geninfo_unexecuted_blocks=1 
00:10:19.270 00:10:19.270 ' 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:19.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.270 --rc genhtml_branch_coverage=1 00:10:19.270 --rc genhtml_function_coverage=1 00:10:19.270 --rc genhtml_legend=1 00:10:19.270 --rc geninfo_all_blocks=1 00:10:19.270 --rc geninfo_unexecuted_blocks=1 00:10:19.270 00:10:19.270 ' 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:19.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.270 --rc genhtml_branch_coverage=1 00:10:19.270 --rc genhtml_function_coverage=1 00:10:19.270 --rc genhtml_legend=1 00:10:19.270 --rc geninfo_all_blocks=1 00:10:19.270 --rc geninfo_unexecuted_blocks=1 00:10:19.270 00:10:19.270 ' 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:19.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.270 --rc genhtml_branch_coverage=1 00:10:19.270 --rc genhtml_function_coverage=1 00:10:19.270 --rc genhtml_legend=1 00:10:19.270 --rc geninfo_all_blocks=1 00:10:19.270 --rc geninfo_unexecuted_blocks=1 00:10:19.270 00:10:19.270 ' 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.270 13:21:00 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.270 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:10:19.271 13:21:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.176 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:10:21.177 Found 0000:09:00.0 (0x8086 - 0x1592) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:10:21.177 Found 0000:09:00.1 (0x8086 - 0x1592) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.177 
13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:21.177 Found net devices under 0000:09:00.0: cvl_0_0 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:21.177 13:21:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:21.177 Found net devices under 0000:09:00.1: cvl_0_1 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # is_hw=yes 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.177 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.436 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.436 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.436 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:21.436 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.436 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.436 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.437 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:21.437 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:21.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:21.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:10:21.437 00:10:21.437 --- 10.0.0.2 ping statistics --- 00:10:21.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.437 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:21.437 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:21.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:10:21.437 00:10:21.437 --- 10.0.0.1 ping statistics --- 00:10:21.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.437 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:10:21.437 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.437 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # return 0 00:10:21.437 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:21.437 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.437 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:21.437 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:21.437 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.437 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:21.437 13:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:21.437 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:21.437 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:21.437 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:10:21.437 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:21.437 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # nvmfpid=1711435 00:10:21.437 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:21.437 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # waitforlisten 1711435 00:10:21.437 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 1711435 ']' 00:10:21.437 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.437 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.437 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.437 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.437 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:21.437 [2024-10-07 13:21:03.067006] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:10:21.437 [2024-10-07 13:21:03.067108] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.437 [2024-10-07 13:21:03.132961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:21.695 [2024-10-07 13:21:03.241203] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.695 [2024-10-07 13:21:03.241255] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.695 [2024-10-07 13:21:03.241287] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.695 [2024-10-07 13:21:03.241300] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.695 [2024-10-07 13:21:03.241310] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:21.695 [2024-10-07 13:21:03.242149] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.695 [2024-10-07 13:21:03.242210] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.695 [2024-10-07 13:21:03.242214] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.695 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:21.695 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:10:21.695 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:21.695 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:21.695 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:21.695 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.695 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:21.954 [2024-10-07 13:21:03.628131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.954 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.524 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:22.524 13:21:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.784 13:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:22.784 13:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:23.043 13:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:23.301 13:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cb28d846-78b6-42af-9886-de0678d594eb 00:10:23.301 13:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cb28d846-78b6-42af-9886-de0678d594eb lvol 20 00:10:23.559 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7ac73fa0-1587-4e39-bc88-dc9f357881ff 00:10:23.559 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:23.816 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7ac73fa0-1587-4e39-bc88-dc9f357881ff 00:10:24.074 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:24.331 [2024-10-07 13:21:05.856413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.331 13:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:24.588 13:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1711809 00:10:24.588 13:21:06 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:24.588 13:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:25.521 13:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7ac73fa0-1587-4e39-bc88-dc9f357881ff MY_SNAPSHOT 00:10:25.778 13:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=69ce3117-4166-435c-b192-f223b7ea31b2 00:10:25.778 13:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7ac73fa0-1587-4e39-bc88-dc9f357881ff 30 00:10:26.344 13:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 69ce3117-4166-435c-b192-f223b7ea31b2 MY_CLONE 00:10:26.602 13:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=228bfd7f-af55-47b6-8113-55e1c81d9aad 00:10:26.602 13:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 228bfd7f-af55-47b6-8113-55e1c81d9aad 00:10:27.172 13:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1711809 00:10:35.278 Initializing NVMe Controllers 00:10:35.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:35.278 Controller IO queue size 128, less than required. 00:10:35.278 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:10:35.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:35.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:35.278 Initialization complete. Launching workers. 00:10:35.278 ======================================================== 00:10:35.278 Latency(us) 00:10:35.278 Device Information : IOPS MiB/s Average min max 00:10:35.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10558.10 41.24 12124.96 619.35 69427.61 00:10:35.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10511.60 41.06 12179.50 2300.29 66332.99 00:10:35.278 ======================================================== 00:10:35.278 Total : 21069.70 82.30 12152.17 619.35 69427.61 00:10:35.278 00:10:35.278 13:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:35.278 13:21:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7ac73fa0-1587-4e39-bc88-dc9f357881ff 00:10:35.536 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cb28d846-78b6-42af-9886-de0678d594eb 00:10:35.794 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:35.794 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:35.794 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:35.794 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # nvmfcleanup 00:10:35.794 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:35.794 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.794 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:35.794 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.794 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.794 rmmod nvme_tcp 00:10:35.794 rmmod nvme_fabrics 00:10:36.053 rmmod nvme_keyring 00:10:36.053 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.053 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:36.053 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:36.053 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@515 -- # '[' -n 1711435 ']' 00:10:36.053 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # killprocess 1711435 00:10:36.053 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 1711435 ']' 00:10:36.053 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 1711435 00:10:36.053 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:10:36.053 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:36.053 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1711435 00:10:36.053 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:36.053 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:36.053 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1711435' 00:10:36.053 killing process with pid 1711435 00:10:36.053 13:21:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 1711435 00:10:36.053 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 1711435 00:10:36.313 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:10:36.314 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:10:36.314 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:10:36.314 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:36.314 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-save 00:10:36.314 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:10:36.314 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@789 -- # iptables-restore 00:10:36.314 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.314 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:36.314 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.314 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.314 13:21:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.221 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:38.221 00:10:38.221 real 0m19.272s 00:10:38.221 user 1m5.896s 00:10:38.221 sys 0m5.403s 00:10:38.221 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:38.221 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:38.221 ************************************ 00:10:38.221 END TEST 
nvmf_lvol 00:10:38.221 ************************************ 00:10:38.480 13:21:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:38.480 13:21:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:38.480 13:21:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:38.480 13:21:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.480 ************************************ 00:10:38.480 START TEST nvmf_lvs_grow 00:10:38.480 ************************************ 00:10:38.480 13:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:38.480 * Looking for test storage... 00:10:38.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.480 13:21:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:38.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.480 --rc genhtml_branch_coverage=1 00:10:38.480 --rc genhtml_function_coverage=1 00:10:38.480 --rc genhtml_legend=1 00:10:38.480 --rc geninfo_all_blocks=1 00:10:38.480 --rc geninfo_unexecuted_blocks=1 00:10:38.480 00:10:38.480 ' 
00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:38.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.480 --rc genhtml_branch_coverage=1 00:10:38.480 --rc genhtml_function_coverage=1 00:10:38.480 --rc genhtml_legend=1 00:10:38.480 --rc geninfo_all_blocks=1 00:10:38.480 --rc geninfo_unexecuted_blocks=1 00:10:38.480 00:10:38.480 ' 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:38.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.480 --rc genhtml_branch_coverage=1 00:10:38.480 --rc genhtml_function_coverage=1 00:10:38.480 --rc genhtml_legend=1 00:10:38.480 --rc geninfo_all_blocks=1 00:10:38.480 --rc geninfo_unexecuted_blocks=1 00:10:38.480 00:10:38.480 ' 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:38.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.480 --rc genhtml_branch_coverage=1 00:10:38.480 --rc genhtml_function_coverage=1 00:10:38.480 --rc genhtml_legend=1 00:10:38.480 --rc geninfo_all_blocks=1 00:10:38.480 --rc geninfo_unexecuted_blocks=1 00:10:38.480 00:10:38.480 ' 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.480 13:21:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.480 
13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.480 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.481 13:21:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # prepare_net_devs 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # local -g is_hw=no 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # remove_spdk_ns 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.481 
13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:10:38.481 13:21:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:10:41.019 Found 0000:09:00.0 (0x8086 - 0x1592) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:10:41.019 Found 0000:09:00.1 (0x8086 - 0x1592) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:41.019 
13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ up == up ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:41.019 Found net devices under 0000:09:00.0: cvl_0_0 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@416 -- # [[ up == up ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:41.019 Found net devices under 0000:09:00.1: cvl_0_1 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # is_hw=yes 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:41.019 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:41.020 13:21:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:41.020 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:41.020 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:10:41.020 00:10:41.020 --- 10.0.0.2 ping statistics --- 00:10:41.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.020 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:41.020 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:41.020 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:10:41.020 00:10:41.020 --- 10.0.0.1 ping statistics --- 00:10:41.020 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:41.020 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # return 0 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # nvmfpid=1715556 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # waitforlisten 1715556 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 1715556 ']' 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:41.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:41.020 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:41.020 [2024-10-07 13:21:22.475599] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:10:41.020 [2024-10-07 13:21:22.475700] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:41.020 [2024-10-07 13:21:22.535627] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:41.020 [2024-10-07 13:21:22.636239] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:41.020 [2024-10-07 13:21:22.636293] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:41.020 [2024-10-07 13:21:22.636320] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:41.020 [2024-10-07 13:21:22.636331] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:41.020 [2024-10-07 13:21:22.636340] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:41.020 [2024-10-07 13:21:22.636831] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.278 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:41.278 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:10:41.278 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:10:41.278 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:41.278 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:41.278 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.278 13:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:41.536 [2024-10-07 13:21:23.011748] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.536 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:41.536 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:41.536 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:41.536 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:41.536 ************************************ 00:10:41.536 START TEST lvs_grow_clean 00:10:41.536 ************************************ 00:10:41.536 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:10:41.536 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:10:41.536 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:41.536 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:41.536 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:41.536 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:41.536 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:41.536 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:41.536 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:41.536 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:41.793 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:41.793 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:42.050 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5db2887f-f87e-48fe-b849-11f2f20ae88e 00:10:42.051 13:21:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5db2887f-f87e-48fe-b849-11f2f20ae88e 00:10:42.051 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:42.313 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:42.313 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:42.313 13:21:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5db2887f-f87e-48fe-b849-11f2f20ae88e lvol 150 00:10:42.591 13:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4449560b-85b7-4df8-8fe8-5c42aa1706cf 00:10:42.591 13:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:42.591 13:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:42.856 [2024-10-07 13:21:24.420054] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:42.856 [2024-10-07 13:21:24.420149] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:42.856 true 00:10:42.856 13:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5db2887f-f87e-48fe-b849-11f2f20ae88e 00:10:42.856 13:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:43.114 13:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:43.114 13:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:43.372 13:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4449560b-85b7-4df8-8fe8-5c42aa1706cf 00:10:43.629 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:43.887 [2024-10-07 13:21:25.491357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:43.887 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:44.145 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1715982 00:10:44.145 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:44.145 13:21:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:44.145 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1715982 /var/tmp/bdevperf.sock 00:10:44.145 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 1715982 ']' 00:10:44.145 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:44.145 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:44.145 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:44.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:44.145 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:44.145 13:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:44.145 [2024-10-07 13:21:25.821410] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:10:44.145 [2024-10-07 13:21:25.821496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1715982 ] 00:10:44.403 [2024-10-07 13:21:25.878568] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.403 [2024-10-07 13:21:25.985428] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.403 13:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.403 13:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:10:44.403 13:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:44.968 Nvme0n1 00:10:44.968 13:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:45.226 [ 00:10:45.226 { 00:10:45.226 "name": "Nvme0n1", 00:10:45.226 "aliases": [ 00:10:45.226 "4449560b-85b7-4df8-8fe8-5c42aa1706cf" 00:10:45.226 ], 00:10:45.226 "product_name": "NVMe disk", 00:10:45.226 "block_size": 4096, 00:10:45.226 "num_blocks": 38912, 00:10:45.226 "uuid": "4449560b-85b7-4df8-8fe8-5c42aa1706cf", 00:10:45.226 "numa_id": 0, 00:10:45.226 "assigned_rate_limits": { 00:10:45.226 "rw_ios_per_sec": 0, 00:10:45.226 "rw_mbytes_per_sec": 0, 00:10:45.226 "r_mbytes_per_sec": 0, 00:10:45.226 "w_mbytes_per_sec": 0 00:10:45.226 }, 00:10:45.226 "claimed": false, 00:10:45.226 "zoned": false, 00:10:45.226 "supported_io_types": { 00:10:45.226 "read": true, 
00:10:45.226 "write": true, 00:10:45.226 "unmap": true, 00:10:45.226 "flush": true, 00:10:45.226 "reset": true, 00:10:45.226 "nvme_admin": true, 00:10:45.226 "nvme_io": true, 00:10:45.226 "nvme_io_md": false, 00:10:45.226 "write_zeroes": true, 00:10:45.226 "zcopy": false, 00:10:45.226 "get_zone_info": false, 00:10:45.226 "zone_management": false, 00:10:45.226 "zone_append": false, 00:10:45.226 "compare": true, 00:10:45.226 "compare_and_write": true, 00:10:45.226 "abort": true, 00:10:45.226 "seek_hole": false, 00:10:45.227 "seek_data": false, 00:10:45.227 "copy": true, 00:10:45.227 "nvme_iov_md": false 00:10:45.227 }, 00:10:45.227 "memory_domains": [ 00:10:45.227 { 00:10:45.227 "dma_device_id": "system", 00:10:45.227 "dma_device_type": 1 00:10:45.227 } 00:10:45.227 ], 00:10:45.227 "driver_specific": { 00:10:45.227 "nvme": [ 00:10:45.227 { 00:10:45.227 "trid": { 00:10:45.227 "trtype": "TCP", 00:10:45.227 "adrfam": "IPv4", 00:10:45.227 "traddr": "10.0.0.2", 00:10:45.227 "trsvcid": "4420", 00:10:45.227 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:45.227 }, 00:10:45.227 "ctrlr_data": { 00:10:45.227 "cntlid": 1, 00:10:45.227 "vendor_id": "0x8086", 00:10:45.227 "model_number": "SPDK bdev Controller", 00:10:45.227 "serial_number": "SPDK0", 00:10:45.227 "firmware_revision": "25.01", 00:10:45.227 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:45.227 "oacs": { 00:10:45.227 "security": 0, 00:10:45.227 "format": 0, 00:10:45.227 "firmware": 0, 00:10:45.227 "ns_manage": 0 00:10:45.227 }, 00:10:45.227 "multi_ctrlr": true, 00:10:45.227 "ana_reporting": false 00:10:45.227 }, 00:10:45.227 "vs": { 00:10:45.227 "nvme_version": "1.3" 00:10:45.227 }, 00:10:45.227 "ns_data": { 00:10:45.227 "id": 1, 00:10:45.227 "can_share": true 00:10:45.227 } 00:10:45.227 } 00:10:45.227 ], 00:10:45.227 "mp_policy": "active_passive" 00:10:45.227 } 00:10:45.227 } 00:10:45.227 ] 00:10:45.227 13:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1716110 00:10:45.227 13:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:45.227 13:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:45.227 Running I/O for 10 seconds... 00:10:46.161 Latency(us) 00:10:46.161 [2024-10-07T11:21:27.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:46.161 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.161 Nvme0n1 : 1.00 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:10:46.161 [2024-10-07T11:21:27.873Z] =================================================================================================================== 00:10:46.161 [2024-10-07T11:21:27.873Z] Total : 15241.00 59.54 0.00 0.00 0.00 0.00 0.00 00:10:46.161 00:10:47.095 13:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5db2887f-f87e-48fe-b849-11f2f20ae88e 00:10:47.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:47.353 Nvme0n1 : 2.00 15431.00 60.28 0.00 0.00 0.00 0.00 0.00 00:10:47.353 [2024-10-07T11:21:29.065Z] =================================================================================================================== 00:10:47.353 [2024-10-07T11:21:29.065Z] Total : 15431.00 60.28 0.00 0.00 0.00 0.00 0.00 00:10:47.353 00:10:47.353 true 00:10:47.353 13:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5db2887f-f87e-48fe-b849-11f2f20ae88e 00:10:47.353 13:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:10:47.612 13:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:47.612 13:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:47.612 13:21:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1716110 00:10:48.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:48.178 Nvme0n1 : 3.00 15516.00 60.61 0.00 0.00 0.00 0.00 0.00 00:10:48.178 [2024-10-07T11:21:29.890Z] =================================================================================================================== 00:10:48.178 [2024-10-07T11:21:29.890Z] Total : 15516.00 60.61 0.00 0.00 0.00 0.00 0.00 00:10:48.178 00:10:49.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:49.552 Nvme0n1 : 4.00 15558.50 60.78 0.00 0.00 0.00 0.00 0.00 00:10:49.552 [2024-10-07T11:21:31.264Z] =================================================================================================================== 00:10:49.552 [2024-10-07T11:21:31.264Z] Total : 15558.50 60.78 0.00 0.00 0.00 0.00 0.00 00:10:49.552 00:10:50.488 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:50.488 Nvme0n1 : 5.00 15647.20 61.12 0.00 0.00 0.00 0.00 0.00 00:10:50.488 [2024-10-07T11:21:32.200Z] =================================================================================================================== 00:10:50.488 [2024-10-07T11:21:32.200Z] Total : 15647.20 61.12 0.00 0.00 0.00 0.00 0.00 00:10:50.488 00:10:51.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.423 Nvme0n1 : 6.00 15727.50 61.44 0.00 0.00 0.00 0.00 0.00 00:10:51.423 [2024-10-07T11:21:33.135Z] =================================================================================================================== 00:10:51.423 
[2024-10-07T11:21:33.135Z] Total : 15727.50 61.44 0.00 0.00 0.00 0.00 0.00 00:10:51.423 00:10:52.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.356 Nvme0n1 : 7.00 15766.71 61.59 0.00 0.00 0.00 0.00 0.00 00:10:52.356 [2024-10-07T11:21:34.068Z] =================================================================================================================== 00:10:52.356 [2024-10-07T11:21:34.068Z] Total : 15766.71 61.59 0.00 0.00 0.00 0.00 0.00 00:10:52.356 00:10:53.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.291 Nvme0n1 : 8.00 15812.00 61.77 0.00 0.00 0.00 0.00 0.00 00:10:53.291 [2024-10-07T11:21:35.003Z] =================================================================================================================== 00:10:53.291 [2024-10-07T11:21:35.003Z] Total : 15812.00 61.77 0.00 0.00 0.00 0.00 0.00 00:10:53.291 00:10:54.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.226 Nvme0n1 : 9.00 15840.22 61.88 0.00 0.00 0.00 0.00 0.00 00:10:54.226 [2024-10-07T11:21:35.938Z] =================================================================================================================== 00:10:54.226 [2024-10-07T11:21:35.938Z] Total : 15840.22 61.88 0.00 0.00 0.00 0.00 0.00 00:10:54.226 00:10:55.160 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:55.160 Nvme0n1 : 10.00 15866.10 61.98 0.00 0.00 0.00 0.00 0.00 00:10:55.160 [2024-10-07T11:21:36.872Z] =================================================================================================================== 00:10:55.160 [2024-10-07T11:21:36.872Z] Total : 15866.10 61.98 0.00 0.00 0.00 0.00 0.00 00:10:55.160 00:10:55.160 00:10:55.160 Latency(us) 00:10:55.160 [2024-10-07T11:21:36.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.160 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:55.160 Nvme0n1 : 10.00 15872.74 62.00 0.00 0.00 8059.47 4320.52 16117.00 00:10:55.160 [2024-10-07T11:21:36.872Z] =================================================================================================================== 00:10:55.160 [2024-10-07T11:21:36.872Z] Total : 15872.74 62.00 0.00 0.00 8059.47 4320.52 16117.00 00:10:55.160 { 00:10:55.160 "results": [ 00:10:55.160 { 00:10:55.160 "job": "Nvme0n1", 00:10:55.160 "core_mask": "0x2", 00:10:55.160 "workload": "randwrite", 00:10:55.160 "status": "finished", 00:10:55.160 "queue_depth": 128, 00:10:55.160 "io_size": 4096, 00:10:55.160 "runtime": 10.003881, 00:10:55.160 "iops": 15872.739789687623, 00:10:55.160 "mibps": 62.002889803467276, 00:10:55.160 "io_failed": 0, 00:10:55.160 "io_timeout": 0, 00:10:55.160 "avg_latency_us": 8059.4687309201145, 00:10:55.160 "min_latency_us": 4320.521481481482, 00:10:55.160 "max_latency_us": 16117.001481481482 00:10:55.160 } 00:10:55.160 ], 00:10:55.160 "core_count": 1 00:10:55.160 } 00:10:55.419 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1715982 00:10:55.419 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 1715982 ']' 00:10:55.419 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 1715982 00:10:55.419 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:10:55.419 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:55.419 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1715982 00:10:55.419 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:55.419 13:21:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:55.419 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1715982' 00:10:55.419 killing process with pid 1715982 00:10:55.419 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 1715982 00:10:55.419 Received shutdown signal, test time was about 10.000000 seconds 00:10:55.419 00:10:55.419 Latency(us) 00:10:55.419 [2024-10-07T11:21:37.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:55.419 [2024-10-07T11:21:37.131Z] =================================================================================================================== 00:10:55.419 [2024-10-07T11:21:37.131Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:55.419 13:21:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 1715982 00:10:55.677 13:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:55.934 13:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:56.192 13:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5db2887f-f87e-48fe-b849-11f2f20ae88e 00:10:56.192 13:21:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:56.450 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:10:56.450 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:56.450 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:56.709 [2024-10-07 13:21:38.264498] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:56.709 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5db2887f-f87e-48fe-b849-11f2f20ae88e 00:10:56.709 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:10:56.709 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5db2887f-f87e-48fe-b849-11f2f20ae88e 00:10:56.709 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:56.709 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:56.709 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:56.709 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:56.709 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:56.709 
13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:56.709 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:56.709 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:56.709 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5db2887f-f87e-48fe-b849-11f2f20ae88e 00:10:56.967 request: 00:10:56.967 { 00:10:56.967 "uuid": "5db2887f-f87e-48fe-b849-11f2f20ae88e", 00:10:56.967 "method": "bdev_lvol_get_lvstores", 00:10:56.967 "req_id": 1 00:10:56.967 } 00:10:56.967 Got JSON-RPC error response 00:10:56.967 response: 00:10:56.967 { 00:10:56.967 "code": -19, 00:10:56.967 "message": "No such device" 00:10:56.967 } 00:10:56.967 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:10:56.967 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:56.967 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:56.967 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:56.967 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:57.225 aio_bdev 00:10:57.225 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4449560b-85b7-4df8-8fe8-5c42aa1706cf 00:10:57.225 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=4449560b-85b7-4df8-8fe8-5c42aa1706cf 00:10:57.225 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:57.225 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:10:57.225 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:57.225 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:57.225 13:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:57.484 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4449560b-85b7-4df8-8fe8-5c42aa1706cf -t 2000 00:10:57.742 [ 00:10:57.742 { 00:10:57.742 "name": "4449560b-85b7-4df8-8fe8-5c42aa1706cf", 00:10:57.742 "aliases": [ 00:10:57.742 "lvs/lvol" 00:10:57.742 ], 00:10:57.742 "product_name": "Logical Volume", 00:10:57.742 "block_size": 4096, 00:10:57.742 "num_blocks": 38912, 00:10:57.742 "uuid": "4449560b-85b7-4df8-8fe8-5c42aa1706cf", 00:10:57.742 "assigned_rate_limits": { 00:10:57.742 "rw_ios_per_sec": 0, 00:10:57.742 "rw_mbytes_per_sec": 0, 00:10:57.742 "r_mbytes_per_sec": 0, 00:10:57.742 "w_mbytes_per_sec": 0 00:10:57.742 }, 00:10:57.742 "claimed": false, 00:10:57.742 "zoned": false, 00:10:57.742 "supported_io_types": { 00:10:57.742 "read": true, 00:10:57.742 "write": true, 00:10:57.742 "unmap": true, 00:10:57.742 "flush": false, 00:10:57.742 "reset": true, 00:10:57.742 
"nvme_admin": false, 00:10:57.742 "nvme_io": false, 00:10:57.742 "nvme_io_md": false, 00:10:57.742 "write_zeroes": true, 00:10:57.742 "zcopy": false, 00:10:57.742 "get_zone_info": false, 00:10:57.742 "zone_management": false, 00:10:57.742 "zone_append": false, 00:10:57.742 "compare": false, 00:10:57.742 "compare_and_write": false, 00:10:57.742 "abort": false, 00:10:57.742 "seek_hole": true, 00:10:57.742 "seek_data": true, 00:10:57.742 "copy": false, 00:10:57.742 "nvme_iov_md": false 00:10:57.742 }, 00:10:57.742 "driver_specific": { 00:10:57.742 "lvol": { 00:10:57.742 "lvol_store_uuid": "5db2887f-f87e-48fe-b849-11f2f20ae88e", 00:10:57.742 "base_bdev": "aio_bdev", 00:10:57.742 "thin_provision": false, 00:10:57.742 "num_allocated_clusters": 38, 00:10:57.742 "snapshot": false, 00:10:57.742 "clone": false, 00:10:57.742 "esnap_clone": false 00:10:57.742 } 00:10:57.742 } 00:10:57.742 } 00:10:57.742 ] 00:10:57.742 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:10:57.742 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5db2887f-f87e-48fe-b849-11f2f20ae88e 00:10:57.742 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:58.001 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:58.001 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5db2887f-f87e-48fe-b849-11f2f20ae88e 00:10:58.001 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:58.259 13:21:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:58.259 13:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4449560b-85b7-4df8-8fe8-5c42aa1706cf 00:10:58.517 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5db2887f-f87e-48fe-b849-11f2f20ae88e 00:10:58.775 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:59.341 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:59.341 00:10:59.341 real 0m17.720s 00:10:59.341 user 0m17.158s 00:10:59.341 sys 0m1.847s 00:10:59.341 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:59.341 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:59.341 ************************************ 00:10:59.341 END TEST lvs_grow_clean 00:10:59.341 ************************************ 00:10:59.341 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:59.342 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:59.342 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:59.342 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:59.342 ************************************ 
00:10:59.342 START TEST lvs_grow_dirty 00:10:59.342 ************************************ 00:10:59.342 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:10:59.342 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:59.342 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:59.342 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:59.342 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:59.342 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:59.342 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:59.342 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:59.342 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:59.342 13:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:59.600 13:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:59.600 13:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:59.859 13:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a5612dda-e466-41cf-a7f4-15e3bd67ff1b 00:10:59.859 13:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5612dda-e466-41cf-a7f4-15e3bd67ff1b 00:10:59.859 13:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:00.117 13:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:00.117 13:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:00.117 13:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a5612dda-e466-41cf-a7f4-15e3bd67ff1b lvol 150 00:11:00.375 13:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=80e7a2e0-c1f4-4867-8586-7c68070bef85 00:11:00.375 13:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:00.375 13:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:00.632 [2024-10-07 13:21:42.215204] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:11:00.632 [2024-10-07 13:21:42.215299] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:00.632 true 00:11:00.633 13:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5612dda-e466-41cf-a7f4-15e3bd67ff1b 00:11:00.633 13:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:00.891 13:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:00.891 13:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:01.149 13:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 80e7a2e0-c1f4-4867-8586-7c68070bef85 00:11:01.407 13:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:01.665 [2024-10-07 13:21:43.286449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:01.665 13:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:01.923 13:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1718070 00:11:01.923 13:21:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:01.923 13:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:01.923 13:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1718070 /var/tmp/bdevperf.sock 00:11:01.923 13:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1718070 ']' 00:11:01.923 13:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:01.923 13:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:01.923 13:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:01.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:01.923 13:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:01.923 13:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:01.923 [2024-10-07 13:21:43.612520] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:11:01.923 [2024-10-07 13:21:43.612603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1718070 ] 00:11:02.182 [2024-10-07 13:21:43.668494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.182 [2024-10-07 13:21:43.777982] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.182 13:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.182 13:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:02.182 13:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:02.747 Nvme0n1 00:11:02.747 13:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:03.005 [ 00:11:03.005 { 00:11:03.005 "name": "Nvme0n1", 00:11:03.005 "aliases": [ 00:11:03.005 "80e7a2e0-c1f4-4867-8586-7c68070bef85" 00:11:03.005 ], 00:11:03.005 "product_name": "NVMe disk", 00:11:03.005 "block_size": 4096, 00:11:03.005 "num_blocks": 38912, 00:11:03.005 "uuid": "80e7a2e0-c1f4-4867-8586-7c68070bef85", 00:11:03.005 "numa_id": 0, 00:11:03.005 "assigned_rate_limits": { 00:11:03.005 "rw_ios_per_sec": 0, 00:11:03.005 "rw_mbytes_per_sec": 0, 00:11:03.005 "r_mbytes_per_sec": 0, 00:11:03.005 "w_mbytes_per_sec": 0 00:11:03.005 }, 00:11:03.005 "claimed": false, 00:11:03.005 "zoned": false, 00:11:03.005 "supported_io_types": { 00:11:03.005 "read": true, 
00:11:03.005 "write": true, 00:11:03.005 "unmap": true, 00:11:03.005 "flush": true, 00:11:03.005 "reset": true, 00:11:03.005 "nvme_admin": true, 00:11:03.005 "nvme_io": true, 00:11:03.005 "nvme_io_md": false, 00:11:03.005 "write_zeroes": true, 00:11:03.005 "zcopy": false, 00:11:03.005 "get_zone_info": false, 00:11:03.005 "zone_management": false, 00:11:03.005 "zone_append": false, 00:11:03.005 "compare": true, 00:11:03.005 "compare_and_write": true, 00:11:03.005 "abort": true, 00:11:03.005 "seek_hole": false, 00:11:03.005 "seek_data": false, 00:11:03.005 "copy": true, 00:11:03.005 "nvme_iov_md": false 00:11:03.005 }, 00:11:03.005 "memory_domains": [ 00:11:03.005 { 00:11:03.005 "dma_device_id": "system", 00:11:03.005 "dma_device_type": 1 00:11:03.005 } 00:11:03.005 ], 00:11:03.005 "driver_specific": { 00:11:03.005 "nvme": [ 00:11:03.005 { 00:11:03.005 "trid": { 00:11:03.005 "trtype": "TCP", 00:11:03.005 "adrfam": "IPv4", 00:11:03.005 "traddr": "10.0.0.2", 00:11:03.005 "trsvcid": "4420", 00:11:03.005 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:03.005 }, 00:11:03.005 "ctrlr_data": { 00:11:03.005 "cntlid": 1, 00:11:03.005 "vendor_id": "0x8086", 00:11:03.005 "model_number": "SPDK bdev Controller", 00:11:03.005 "serial_number": "SPDK0", 00:11:03.005 "firmware_revision": "25.01", 00:11:03.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:03.005 "oacs": { 00:11:03.005 "security": 0, 00:11:03.005 "format": 0, 00:11:03.005 "firmware": 0, 00:11:03.005 "ns_manage": 0 00:11:03.005 }, 00:11:03.005 "multi_ctrlr": true, 00:11:03.005 "ana_reporting": false 00:11:03.005 }, 00:11:03.005 "vs": { 00:11:03.005 "nvme_version": "1.3" 00:11:03.005 }, 00:11:03.005 "ns_data": { 00:11:03.005 "id": 1, 00:11:03.005 "can_share": true 00:11:03.005 } 00:11:03.005 } 00:11:03.005 ], 00:11:03.005 "mp_policy": "active_passive" 00:11:03.005 } 00:11:03.005 } 00:11:03.005 ] 00:11:03.005 13:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1718200 00:11:03.005 13:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:03.005 13:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:03.263 Running I/O for 10 seconds... 00:11:04.198 Latency(us) 00:11:04.198 [2024-10-07T11:21:45.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.199 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.199 Nvme0n1 : 1.00 15433.00 60.29 0.00 0.00 0.00 0.00 0.00 00:11:04.199 [2024-10-07T11:21:45.911Z] =================================================================================================================== 00:11:04.199 [2024-10-07T11:21:45.911Z] Total : 15433.00 60.29 0.00 0.00 0.00 0.00 0.00 00:11:04.199 00:11:05.132 13:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a5612dda-e466-41cf-a7f4-15e3bd67ff1b 00:11:05.132 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.132 Nvme0n1 : 2.00 15590.50 60.90 0.00 0.00 0.00 0.00 0.00 00:11:05.132 [2024-10-07T11:21:46.844Z] =================================================================================================================== 00:11:05.132 [2024-10-07T11:21:46.844Z] Total : 15590.50 60.90 0.00 0.00 0.00 0.00 0.00 00:11:05.132 00:11:05.389 true 00:11:05.389 13:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5612dda-e466-41cf-a7f4-15e3bd67ff1b 00:11:05.390 13:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:11:05.647 13:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:05.647 13:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:05.647 13:21:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1718200 00:11:06.214 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.214 Nvme0n1 : 3.00 15665.67 61.19 0.00 0.00 0.00 0.00 0.00 00:11:06.214 [2024-10-07T11:21:47.926Z] =================================================================================================================== 00:11:06.214 [2024-10-07T11:21:47.926Z] Total : 15665.67 61.19 0.00 0.00 0.00 0.00 0.00 00:11:06.214 00:11:07.159 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:07.159 Nvme0n1 : 4.00 15790.75 61.68 0.00 0.00 0.00 0.00 0.00 00:11:07.159 [2024-10-07T11:21:48.871Z] =================================================================================================================== 00:11:07.159 [2024-10-07T11:21:48.871Z] Total : 15790.75 61.68 0.00 0.00 0.00 0.00 0.00 00:11:07.159 00:11:08.092 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:08.092 Nvme0n1 : 5.00 15872.60 62.00 0.00 0.00 0.00 0.00 0.00 00:11:08.092 [2024-10-07T11:21:49.804Z] =================================================================================================================== 00:11:08.092 [2024-10-07T11:21:49.804Z] Total : 15872.60 62.00 0.00 0.00 0.00 0.00 0.00 00:11:08.092 00:11:09.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:09.464 Nvme0n1 : 6.00 15866.00 61.98 0.00 0.00 0.00 0.00 0.00 00:11:09.464 [2024-10-07T11:21:51.176Z] =================================================================================================================== 00:11:09.464 
[2024-10-07T11:21:51.176Z] Total : 15866.00 61.98 0.00 0.00 0.00 0.00 0.00 00:11:09.464 00:11:10.398 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:10.398 Nvme0n1 : 7.00 15919.29 62.18 0.00 0.00 0.00 0.00 0.00 00:11:10.398 [2024-10-07T11:21:52.110Z] =================================================================================================================== 00:11:10.398 [2024-10-07T11:21:52.110Z] Total : 15919.29 62.18 0.00 0.00 0.00 0.00 0.00 00:11:10.398 00:11:11.358 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:11.358 Nvme0n1 : 8.00 15966.12 62.37 0.00 0.00 0.00 0.00 0.00 00:11:11.358 [2024-10-07T11:21:53.070Z] =================================================================================================================== 00:11:11.358 [2024-10-07T11:21:53.070Z] Total : 15966.12 62.37 0.00 0.00 0.00 0.00 0.00 00:11:11.358 00:11:12.291 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:12.291 Nvme0n1 : 9.00 16006.22 62.52 0.00 0.00 0.00 0.00 0.00 00:11:12.291 [2024-10-07T11:21:54.003Z] =================================================================================================================== 00:11:12.291 [2024-10-07T11:21:54.003Z] Total : 16006.22 62.52 0.00 0.00 0.00 0.00 0.00 00:11:12.291 00:11:13.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:13.225 Nvme0n1 : 10.00 16038.30 62.65 0.00 0.00 0.00 0.00 0.00 00:11:13.225 [2024-10-07T11:21:54.937Z] =================================================================================================================== 00:11:13.225 [2024-10-07T11:21:54.937Z] Total : 16038.30 62.65 0.00 0.00 0.00 0.00 0.00 00:11:13.225 00:11:13.225 00:11:13.225 Latency(us) 00:11:13.225 [2024-10-07T11:21:54.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:11:13.225 Nvme0n1 : 10.00 16044.33 62.67 0.00 0.00 7973.31 3106.89 15340.28 00:11:13.225 [2024-10-07T11:21:54.937Z] =================================================================================================================== 00:11:13.225 [2024-10-07T11:21:54.937Z] Total : 16044.33 62.67 0.00 0.00 7973.31 3106.89 15340.28 00:11:13.225 { 00:11:13.225 "results": [ 00:11:13.225 { 00:11:13.225 "job": "Nvme0n1", 00:11:13.225 "core_mask": "0x2", 00:11:13.225 "workload": "randwrite", 00:11:13.225 "status": "finished", 00:11:13.225 "queue_depth": 128, 00:11:13.225 "io_size": 4096, 00:11:13.225 "runtime": 10.004221, 00:11:13.225 "iops": 16044.327689282354, 00:11:13.225 "mibps": 62.673155036259196, 00:11:13.225 "io_failed": 0, 00:11:13.225 "io_timeout": 0, 00:11:13.225 "avg_latency_us": 7973.312123295116, 00:11:13.225 "min_latency_us": 3106.8918518518517, 00:11:13.225 "max_latency_us": 15340.278518518518 00:11:13.225 } 00:11:13.225 ], 00:11:13.225 "core_count": 1 00:11:13.225 } 00:11:13.225 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1718070 00:11:13.225 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 1718070 ']' 00:11:13.225 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 1718070 00:11:13.225 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:11:13.225 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:13.225 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1718070 00:11:13.225 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:13.225 13:21:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:13.225 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1718070' 00:11:13.225 killing process with pid 1718070 00:11:13.225 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 1718070 00:11:13.225 Received shutdown signal, test time was about 10.000000 seconds 00:11:13.225 00:11:13.225 Latency(us) 00:11:13.225 [2024-10-07T11:21:54.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.225 [2024-10-07T11:21:54.937Z] =================================================================================================================== 00:11:13.225 [2024-10-07T11:21:54.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:13.225 13:21:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 1718070 00:11:13.484 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:13.741 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:14.000 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5612dda-e466-41cf-a7f4-15e3bd67ff1b 00:11:14.000 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1715556 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1715556 00:11:14.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1715556 Killed "${NVMF_APP[@]}" "$@" 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # nvmfpid=1719482 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # waitforlisten 1719482 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 1719482 ']' 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.258 13:21:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.258 13:21:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:14.517 [2024-10-07 13:21:55.990772] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:11:14.517 [2024-10-07 13:21:55.990874] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.517 [2024-10-07 13:21:56.057070] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.517 [2024-10-07 13:21:56.165624] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:14.517 [2024-10-07 13:21:56.165717] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:14.517 [2024-10-07 13:21:56.165746] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:14.517 [2024-10-07 13:21:56.165758] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:14.517 [2024-10-07 13:21:56.165767] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:14.517 [2024-10-07 13:21:56.166335] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.775 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:14.775 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:11:14.775 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:14.775 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:14.775 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:14.775 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.775 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:15.034 [2024-10-07 13:21:56.549565] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:15.034 [2024-10-07 13:21:56.549741] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:15.034 [2024-10-07 13:21:56.549788] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:15.034 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:15.034 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 80e7a2e0-c1f4-4867-8586-7c68070bef85 00:11:15.034 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=80e7a2e0-c1f4-4867-8586-7c68070bef85 
00:11:15.034 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:15.034 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:15.034 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:15.034 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:15.034 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:15.293 13:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 80e7a2e0-c1f4-4867-8586-7c68070bef85 -t 2000 00:11:15.551 [ 00:11:15.551 { 00:11:15.551 "name": "80e7a2e0-c1f4-4867-8586-7c68070bef85", 00:11:15.551 "aliases": [ 00:11:15.551 "lvs/lvol" 00:11:15.551 ], 00:11:15.551 "product_name": "Logical Volume", 00:11:15.551 "block_size": 4096, 00:11:15.551 "num_blocks": 38912, 00:11:15.551 "uuid": "80e7a2e0-c1f4-4867-8586-7c68070bef85", 00:11:15.551 "assigned_rate_limits": { 00:11:15.551 "rw_ios_per_sec": 0, 00:11:15.551 "rw_mbytes_per_sec": 0, 00:11:15.551 "r_mbytes_per_sec": 0, 00:11:15.551 "w_mbytes_per_sec": 0 00:11:15.551 }, 00:11:15.551 "claimed": false, 00:11:15.551 "zoned": false, 00:11:15.551 "supported_io_types": { 00:11:15.551 "read": true, 00:11:15.551 "write": true, 00:11:15.551 "unmap": true, 00:11:15.551 "flush": false, 00:11:15.551 "reset": true, 00:11:15.551 "nvme_admin": false, 00:11:15.551 "nvme_io": false, 00:11:15.551 "nvme_io_md": false, 00:11:15.551 "write_zeroes": true, 00:11:15.551 "zcopy": false, 00:11:15.551 "get_zone_info": false, 00:11:15.551 "zone_management": false, 00:11:15.551 "zone_append": 
false, 00:11:15.551 "compare": false, 00:11:15.551 "compare_and_write": false, 00:11:15.551 "abort": false, 00:11:15.551 "seek_hole": true, 00:11:15.551 "seek_data": true, 00:11:15.551 "copy": false, 00:11:15.551 "nvme_iov_md": false 00:11:15.551 }, 00:11:15.551 "driver_specific": { 00:11:15.551 "lvol": { 00:11:15.551 "lvol_store_uuid": "a5612dda-e466-41cf-a7f4-15e3bd67ff1b", 00:11:15.551 "base_bdev": "aio_bdev", 00:11:15.551 "thin_provision": false, 00:11:15.551 "num_allocated_clusters": 38, 00:11:15.551 "snapshot": false, 00:11:15.551 "clone": false, 00:11:15.551 "esnap_clone": false 00:11:15.551 } 00:11:15.551 } 00:11:15.551 } 00:11:15.551 ] 00:11:15.551 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:15.551 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5612dda-e466-41cf-a7f4-15e3bd67ff1b 00:11:15.551 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:15.826 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:15.826 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5612dda-e466-41cf-a7f4-15e3bd67ff1b 00:11:15.826 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:16.101 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:16.101 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:11:16.365 [2024-10-07 13:21:57.927347] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:16.365 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5612dda-e466-41cf-a7f4-15e3bd67ff1b 00:11:16.365 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:16.365 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5612dda-e466-41cf-a7f4-15e3bd67ff1b 00:11:16.365 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.365 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.365 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.365 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.365 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.365 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:16.365 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.365 13:21:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:16.365 13:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5612dda-e466-41cf-a7f4-15e3bd67ff1b 00:11:16.624 request: 00:11:16.624 { 00:11:16.624 "uuid": "a5612dda-e466-41cf-a7f4-15e3bd67ff1b", 00:11:16.624 "method": "bdev_lvol_get_lvstores", 00:11:16.624 "req_id": 1 00:11:16.624 } 00:11:16.624 Got JSON-RPC error response 00:11:16.624 response: 00:11:16.624 { 00:11:16.624 "code": -19, 00:11:16.624 "message": "No such device" 00:11:16.624 } 00:11:16.624 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:16.624 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:16.624 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:16.624 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:16.624 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:16.882 aio_bdev 00:11:16.882 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 80e7a2e0-c1f4-4867-8586-7c68070bef85 00:11:16.882 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=80e7a2e0-c1f4-4867-8586-7c68070bef85 00:11:16.882 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:16.882 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:11:16.882 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:16.882 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:16.882 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:17.139 13:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 80e7a2e0-c1f4-4867-8586-7c68070bef85 -t 2000 00:11:17.398 [ 00:11:17.398 { 00:11:17.398 "name": "80e7a2e0-c1f4-4867-8586-7c68070bef85", 00:11:17.398 "aliases": [ 00:11:17.398 "lvs/lvol" 00:11:17.398 ], 00:11:17.398 "product_name": "Logical Volume", 00:11:17.398 "block_size": 4096, 00:11:17.398 "num_blocks": 38912, 00:11:17.398 "uuid": "80e7a2e0-c1f4-4867-8586-7c68070bef85", 00:11:17.398 "assigned_rate_limits": { 00:11:17.398 "rw_ios_per_sec": 0, 00:11:17.398 "rw_mbytes_per_sec": 0, 00:11:17.398 "r_mbytes_per_sec": 0, 00:11:17.398 "w_mbytes_per_sec": 0 00:11:17.398 }, 00:11:17.398 "claimed": false, 00:11:17.398 "zoned": false, 00:11:17.398 "supported_io_types": { 00:11:17.398 "read": true, 00:11:17.398 "write": true, 00:11:17.398 "unmap": true, 00:11:17.398 "flush": false, 00:11:17.398 "reset": true, 00:11:17.398 "nvme_admin": false, 00:11:17.398 "nvme_io": false, 00:11:17.398 "nvme_io_md": false, 00:11:17.398 "write_zeroes": true, 00:11:17.398 "zcopy": false, 00:11:17.398 "get_zone_info": false, 00:11:17.398 "zone_management": false, 00:11:17.398 "zone_append": false, 00:11:17.398 "compare": false, 00:11:17.398 "compare_and_write": false, 
00:11:17.398 "abort": false, 00:11:17.398 "seek_hole": true, 00:11:17.398 "seek_data": true, 00:11:17.398 "copy": false, 00:11:17.398 "nvme_iov_md": false 00:11:17.398 }, 00:11:17.398 "driver_specific": { 00:11:17.398 "lvol": { 00:11:17.398 "lvol_store_uuid": "a5612dda-e466-41cf-a7f4-15e3bd67ff1b", 00:11:17.398 "base_bdev": "aio_bdev", 00:11:17.398 "thin_provision": false, 00:11:17.398 "num_allocated_clusters": 38, 00:11:17.398 "snapshot": false, 00:11:17.398 "clone": false, 00:11:17.398 "esnap_clone": false 00:11:17.398 } 00:11:17.398 } 00:11:17.398 } 00:11:17.398 ] 00:11:17.398 13:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:11:17.398 13:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5612dda-e466-41cf-a7f4-15e3bd67ff1b 00:11:17.398 13:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:17.656 13:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:17.656 13:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a5612dda-e466-41cf-a7f4-15e3bd67ff1b 00:11:17.656 13:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:17.915 13:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:17.915 13:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 80e7a2e0-c1f4-4867-8586-7c68070bef85 00:11:18.173 13:21:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a5612dda-e466-41cf-a7f4-15e3bd67ff1b 00:11:18.740 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:18.997 00:11:18.997 real 0m19.684s 00:11:18.997 user 0m48.827s 00:11:18.997 sys 0m4.845s 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:18.997 ************************************ 00:11:18.997 END TEST lvs_grow_dirty 00:11:18.997 ************************************ 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:18.997 nvmf_trace.0 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:18.997 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:18.998 rmmod nvme_tcp 00:11:18.998 rmmod nvme_fabrics 00:11:18.998 rmmod nvme_keyring 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@515 -- # '[' -n 1719482 ']' 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # killprocess 1719482 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 1719482 ']' 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 1719482 
00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1719482 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1719482' 00:11:18.998 killing process with pid 1719482 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 1719482 00:11:18.998 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 1719482 00:11:19.257 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:19.257 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:19.257 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:19.257 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:19.257 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-save 00:11:19.257 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:19.257 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@789 -- # iptables-restore 00:11:19.257 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:19.257 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:11:19.257 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.257 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.258 13:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.797 13:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:21.797 00:11:21.797 real 0m43.012s 00:11:21.797 user 1m12.161s 00:11:21.797 sys 0m8.696s 00:11:21.797 13:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.797 13:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:21.797 ************************************ 00:11:21.797 END TEST nvmf_lvs_grow 00:11:21.797 ************************************ 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:21.797 ************************************ 00:11:21.797 START TEST nvmf_bdev_io_wait 00:11:21.797 ************************************ 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:21.797 * Looking for test storage... 
00:11:21.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:21.797 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.797 --rc genhtml_branch_coverage=1 00:11:21.797 --rc genhtml_function_coverage=1 00:11:21.797 --rc genhtml_legend=1 00:11:21.797 --rc geninfo_all_blocks=1 00:11:21.797 --rc geninfo_unexecuted_blocks=1 00:11:21.797 00:11:21.797 ' 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:21.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.797 --rc genhtml_branch_coverage=1 00:11:21.797 --rc genhtml_function_coverage=1 00:11:21.797 --rc genhtml_legend=1 00:11:21.797 --rc geninfo_all_blocks=1 00:11:21.797 --rc geninfo_unexecuted_blocks=1 00:11:21.797 00:11:21.797 ' 00:11:21.797 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:21.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.797 --rc genhtml_branch_coverage=1 00:11:21.797 --rc genhtml_function_coverage=1 00:11:21.797 --rc genhtml_legend=1 00:11:21.797 --rc geninfo_all_blocks=1 00:11:21.797 --rc geninfo_unexecuted_blocks=1 00:11:21.797 00:11:21.797 ' 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:21.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:21.798 --rc genhtml_branch_coverage=1 00:11:21.798 --rc genhtml_function_coverage=1 00:11:21.798 --rc genhtml_legend=1 00:11:21.798 --rc geninfo_all_blocks=1 00:11:21.798 --rc geninfo_unexecuted_blocks=1 00:11:21.798 00:11:21.798 ' 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.798 13:22:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:21.798 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:21.798 13:22:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:23.705 13:22:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:11:23.705 Found 0000:09:00.0 (0x8086 - 0x1592) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:11:23.705 Found 0000:09:00.1 (0x8086 - 0x1592) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:23.705 13:22:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:23.705 Found net devices under 0000:09:00.0: cvl_0_0 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.705 
13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:23.705 Found net devices under 0000:09:00.1: cvl_0_1 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # is_hw=yes 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.705 13:22:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.705 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:23.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:11:23.706 00:11:23.706 --- 10.0.0.2 ping statistics --- 00:11:23.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.706 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:23.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:11:23.706 00:11:23.706 --- 10.0.0.1 ping statistics --- 00:11:23.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.706 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # return 0 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # nvmfpid=1721908 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # waitforlisten 1721908 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 1721908 ']' 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:23.706 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:23.964 [2024-10-07 13:22:05.457270] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:11:23.964 [2024-10-07 13:22:05.457356] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.964 [2024-10-07 13:22:05.517243] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.964 [2024-10-07 13:22:05.619197] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.964 [2024-10-07 13:22:05.619261] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:23.964 [2024-10-07 13:22:05.619274] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.964 [2024-10-07 13:22:05.619285] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.964 [2024-10-07 13:22:05.619308] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.964 [2024-10-07 13:22:05.620723] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.964 [2024-10-07 13:22:05.620793] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.964 [2024-10-07 13:22:05.620861] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.964 [2024-10-07 13:22:05.620858] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.964 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:23.964 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:11:23.964 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:23.964 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:23.964 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.223 13:22:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.223 [2024-10-07 13:22:05.774343] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.223 Malloc0 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.223 
13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.223 [2024-10-07 13:22:05.838325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1722022 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1722025 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
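The `rpc_cmd` calls traced above configure the target step by step: shrink the bdev_io pool (`bdev_set_options -p 5 -c 1`) so IO_WAIT actually triggers, leave the `--wait-for-rpc` state, create the TCP transport, back a namespace with a 64 MiB malloc bdev, and expose it on 10.0.0.2:4420. A sketch of the same sequence written against SPDK's `scripts/rpc.py`; the `SPDK_ROOT` path is an assumption, and nothing executes unless an `nvmf_tgt` is listening on the default `/var/tmp/spdk.sock`:

```shell
# Sketch of the RPC sequence the trace runs via rpc_cmd. SPDK_ROOT is an
# assumed checkout location; the target must already be running with
# --wait-for-rpc for framework_start_init to be meaningful.
SPDK_ROOT=${SPDK_ROOT:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

rpc() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }

setup_bdev_io_wait_target() {
  rpc bdev_set_options -p 5 -c 1            # tiny bdev_io pool: forces IO_WAIT
  rpc framework_start_init                  # finish startup deferred by --wait-for-rpc
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 -b Malloc0  # 64 MiB bdev, 512 B blocks
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
```

In the CI run the equivalent calls go through `rpc_cmd`, which keeps one persistent connection to the target's RPC socket inside the namespace.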
00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:24.223 { 00:11:24.223 "params": { 00:11:24.223 "name": "Nvme$subsystem", 00:11:24.223 "trtype": "$TEST_TRANSPORT", 00:11:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.223 "adrfam": "ipv4", 00:11:24.223 "trsvcid": "$NVMF_PORT", 00:11:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.223 "hdgst": ${hdgst:-false}, 00:11:24.223 "ddgst": ${ddgst:-false} 00:11:24.223 }, 00:11:24.223 "method": "bdev_nvme_attach_controller" 00:11:24.223 } 00:11:24.223 EOF 00:11:24.223 )") 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1722028 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:24.223 13:22:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1722032 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:24.223 { 00:11:24.223 "params": { 00:11:24.223 "name": "Nvme$subsystem", 00:11:24.223 "trtype": "$TEST_TRANSPORT", 00:11:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.223 "adrfam": "ipv4", 00:11:24.223 "trsvcid": "$NVMF_PORT", 00:11:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.223 "hdgst": ${hdgst:-false}, 00:11:24.223 "ddgst": ${ddgst:-false} 00:11:24.223 }, 00:11:24.223 "method": "bdev_nvme_attach_controller" 00:11:24.223 } 00:11:24.223 EOF 00:11:24.223 )") 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:24.223 { 00:11:24.223 "params": { 00:11:24.223 "name": "Nvme$subsystem", 00:11:24.223 "trtype": "$TEST_TRANSPORT", 00:11:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.223 "adrfam": "ipv4", 00:11:24.223 "trsvcid": "$NVMF_PORT", 00:11:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.223 "hdgst": ${hdgst:-false}, 00:11:24.223 "ddgst": ${ddgst:-false} 00:11:24.223 }, 00:11:24.223 "method": "bdev_nvme_attach_controller" 00:11:24.223 } 00:11:24.223 EOF 00:11:24.223 )") 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # config=() 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # local subsystem config 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:24.223 { 00:11:24.223 "params": { 00:11:24.223 "name": "Nvme$subsystem", 00:11:24.223 "trtype": "$TEST_TRANSPORT", 00:11:24.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.223 "adrfam": "ipv4", 00:11:24.223 "trsvcid": "$NVMF_PORT", 00:11:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.223 "hdgst": ${hdgst:-false}, 00:11:24.223 "ddgst": ${ddgst:-false} 00:11:24.223 }, 00:11:24.223 "method": "bdev_nvme_attach_controller" 00:11:24.223 } 00:11:24.223 EOF 00:11:24.223 )") 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # cat 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1722022 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@580 -- # cat 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:24.223 "params": { 00:11:24.223 "name": "Nvme1", 00:11:24.223 "trtype": "tcp", 00:11:24.223 "traddr": "10.0.0.2", 00:11:24.223 "adrfam": "ipv4", 00:11:24.223 "trsvcid": "4420", 00:11:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.223 "hdgst": false, 00:11:24.223 "ddgst": false 00:11:24.223 }, 00:11:24.223 "method": "bdev_nvme_attach_controller" 00:11:24.223 }' 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # jq . 
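The four interleaved `config+=("$(cat <<-EOF ...)")` / `jq .` / `printf '%s\n'` traces above are all instances of the `gen_nvmf_target_json` pattern: one heredoc JSON fragment per subsystem is collected, the fragments are comma-joined (the real helper uses a bash array with `IFS=,`), normalized through `jq .`, and handed to bdevperf via `--json /dev/fd/63`. A simplified POSIX sketch of the assembly step; the variable defaults are illustrative stand-ins for the `NVMF_*` environment the tests export, and the real helper additionally wraps these entries in a full bdev-subsystem config object:

```shell
# Simplified sketch of the gen_nvmf_target_json fragment assembly seen in
# the trace. Defaults below are illustrative; the CI run sets them from
# nvmf/common.sh before bdevperf starts.
TEST_TRANSPORT=${TEST_TRANSPORT:-tcp}
NVMF_FIRST_TARGET_IP=${NVMF_FIRST_TARGET_IP:-10.0.0.2}
NVMF_PORT=${NVMF_PORT:-4420}

gen_target_json() {
  config="" sep=""
  for subsystem in "${@:-1}"; do
    # One fragment per subsystem, mirroring the heredoc in the trace.
    frag=$(cat <<EOF
{ "params": { "name": "Nvme$subsystem", "trtype": "$TEST_TRANSPORT",
              "traddr": "$NVMF_FIRST_TARGET_IP", "adrfam": "ipv4",
              "trsvcid": "$NVMF_PORT",
              "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
              "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
              "hdgst": false, "ddgst": false },
  "method": "bdev_nvme_attach_controller" }
EOF
)
    config="$config$sep$frag"
    sep=","
  done
  printf '[%s]\n' "$config"   # the real helper pipes the result through `jq .`
}

gen_target_json 1
```

With no arguments it defaults to subsystem 1, which is why every bdevperf instance in this run attaches the same `Nvme1` controller.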
00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:24.223 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:24.223 "params": { 00:11:24.223 "name": "Nvme1", 00:11:24.223 "trtype": "tcp", 00:11:24.223 "traddr": "10.0.0.2", 00:11:24.223 "adrfam": "ipv4", 00:11:24.223 "trsvcid": "4420", 00:11:24.223 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.223 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.223 "hdgst": false, 00:11:24.223 "ddgst": false 00:11:24.223 }, 00:11:24.223 "method": "bdev_nvme_attach_controller" 00:11:24.224 }' 00:11:24.224 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:24.224 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:24.224 "params": { 00:11:24.224 "name": "Nvme1", 00:11:24.224 "trtype": "tcp", 00:11:24.224 "traddr": "10.0.0.2", 00:11:24.224 "adrfam": "ipv4", 00:11:24.224 "trsvcid": "4420", 00:11:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.224 "hdgst": false, 00:11:24.224 "ddgst": false 00:11:24.224 }, 00:11:24.224 "method": "bdev_nvme_attach_controller" 00:11:24.224 }' 00:11:24.224 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@583 -- # IFS=, 00:11:24.224 13:22:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:24.224 "params": { 00:11:24.224 "name": "Nvme1", 00:11:24.224 "trtype": "tcp", 00:11:24.224 "traddr": "10.0.0.2", 00:11:24.224 "adrfam": "ipv4", 00:11:24.224 "trsvcid": "4420", 00:11:24.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.224 "hdgst": false, 00:11:24.224 "ddgst": false 00:11:24.224 }, 00:11:24.224 "method": "bdev_nvme_attach_controller" 00:11:24.224 }' 00:11:24.224 [2024-10-07 13:22:05.889558] Starting SPDK v25.01-pre git sha1 
d16db39ee / DPDK 24.03.0 initialization... 00:11:24.224 [2024-10-07 13:22:05.889558] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:11:24.224 [2024-10-07 13:22:05.889611] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:11:24.224 [2024-10-07 13:22:05.889611] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:11:24.224 [2024-10-07 13:22:05.889638] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:24.224 [2024-10-07 13:22:05.889638] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:24.224 [2024-10-07 13:22:05.889705] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:24.224 [2024-10-07 13:22:05.889708] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:24.482 [2024-10-07 13:22:06.069625] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.482 [2024-10-07 13:22:06.169927] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:11:24.482 [2024-10-07 13:22:06.172591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.740 [2024-10-07 13:22:06.245940] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.740 [2024-10-07
13:22:06.276263] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:11:24.740 [2024-10-07 13:22:06.320866] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:24.740 [2024-10-07 13:22:06.341758] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:11:24.740 [2024-10-07 13:22:06.412143] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 7
00:11:24.998 Running I/O for 1 seconds...
00:11:25.257 Running I/O for 1 seconds...
00:11:25.257 Running I/O for 1 seconds...
00:11:25.257 Running I/O for 1 seconds...
00:11:26.193 8860.00 IOPS, 34.61 MiB/s
00:11:26.193 Latency(us)
00:11:26.193 [2024-10-07T11:22:07.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:26.193 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:11:26.193 Nvme1n1 : 1.01 8917.59 34.83 0.00 0.00 14287.97 7330.32 20194.80
00:11:26.193 [2024-10-07T11:22:07.905Z] ===================================================================================================================
00:11:26.193 [2024-10-07T11:22:07.905Z] Total : 8917.59 34.83 0.00 0.00 14287.97 7330.32 20194.80
00:11:26.193 7883.00 IOPS, 30.79 MiB/s
00:11:26.193 Latency(us)
00:11:26.193 [2024-10-07T11:22:07.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:26.193 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:11:26.193 Nvme1n1 : 1.01 7926.36 30.96 0.00 0.00 16054.67 9320.68 25631.86
00:11:26.193 [2024-10-07T11:22:07.905Z] ===================================================================================================================
00:11:26.193 [2024-10-07T11:22:07.905Z] Total : 7926.36 30.96 0.00 0.00 16054.67 9320.68 25631.86
00:11:26.193 9680.00 IOPS, 37.81 MiB/s
00:11:26.193 Latency(us)
00:11:26.193 [2024-10-07T11:22:07.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:26.193 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:11:26.193 Nvme1n1 : 1.01 9754.61 38.10 0.00 0.00 13074.73 2524.35 19320.98
00:11:26.193 [2024-10-07T11:22:07.905Z] ===================================================================================================================
00:11:26.193 [2024-10-07T11:22:07.905Z] Total : 9754.61 38.10 0.00 0.00 13074.73 2524.35 19320.98
00:11:26.193 187840.00 IOPS, 733.75 MiB/s
00:11:26.193 Latency(us)
00:11:26.193 [2024-10-07T11:22:07.905Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:26.193 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:11:26.193 Nvme1n1 : 1.00 187473.80 732.32 0.00 0.00 679.06 320.09 1929.67
00:11:26.193 [2024-10-07T11:22:07.905Z] ===================================================================================================================
00:11:26.193 [2024-10-07T11:22:07.905Z] Total : 187473.80 732.32 0.00 0.00 679.06 320.09 1929.67
00:11:26.451 13:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1722025
00:11:26.451 13:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1722028
00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1722032
00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.709 rmmod nvme_tcp 00:11:26.709 rmmod nvme_fabrics 00:11:26.709 rmmod nvme_keyring 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@515 -- # '[' -n 1721908 ']' 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # killprocess 1721908 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 1721908 ']' 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 1721908 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1721908 00:11:26.709 13:22:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1721908' 00:11:26.709 killing process with pid 1721908 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 1721908 00:11:26.709 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 1721908 00:11:26.967 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:26.967 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:26.967 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:26.967 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:26.967 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-save 00:11:26.967 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:26.967 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@789 -- # iptables-restore 00:11:26.967 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:26.967 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:26.967 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.967 13:22:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.967 13:22:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:28.874 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:11:28.874
00:11:28.874 real	0m7.530s
00:11:28.874 user	0m17.529s
00:11:28.874 sys	0m3.910s
00:11:28.874 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable
00:11:28.874 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:11:28.874 ************************************
00:11:28.874 END TEST nvmf_bdev_io_wait
00:11:28.874 ************************************
00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:11:29.135 ************************************
00:11:29.135 START TEST nvmf_queue_depth
00:11:29.135 ************************************
00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:11:29.135 * Looking for test storage...
00:11:29.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:29.135 
13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:29.135 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:29.135 --rc genhtml_branch_coverage=1 00:11:29.135 --rc genhtml_function_coverage=1 00:11:29.135 --rc genhtml_legend=1 00:11:29.135 --rc geninfo_all_blocks=1 00:11:29.135 --rc geninfo_unexecuted_blocks=1 00:11:29.135 00:11:29.135 ' 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:29.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.135 --rc genhtml_branch_coverage=1 00:11:29.135 --rc genhtml_function_coverage=1 00:11:29.135 --rc genhtml_legend=1 00:11:29.135 --rc geninfo_all_blocks=1 00:11:29.135 --rc geninfo_unexecuted_blocks=1 00:11:29.135 00:11:29.135 ' 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:29.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.135 --rc genhtml_branch_coverage=1 00:11:29.135 --rc genhtml_function_coverage=1 00:11:29.135 --rc genhtml_legend=1 00:11:29.135 --rc geninfo_all_blocks=1 00:11:29.135 --rc geninfo_unexecuted_blocks=1 00:11:29.135 00:11:29.135 ' 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:29.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:29.135 --rc genhtml_branch_coverage=1 00:11:29.135 --rc genhtml_function_coverage=1 00:11:29.135 --rc genhtml_legend=1 00:11:29.135 --rc geninfo_all_blocks=1 00:11:29.135 --rc geninfo_unexecuted_blocks=1 00:11:29.135 00:11:29.135 ' 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.135 13:22:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.135 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:29.136 13:22:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:29.136 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.136 13:22:10 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:11:29.136 13:22:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:11:31.672 13:22:12 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:31.672 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:11:31.673 Found 0000:09:00.0 (0x8086 - 0x1592) 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:11:31.673 Found 0000:09:00.1 (0x8086 - 0x1592) 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x1592 == \0\x\1\0\1\7 ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:31.673 Found net devices under 0000:09:00.0: cvl_0_0 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:31.673 Found net devices under 0000:09:00.1: cvl_0_1 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # is_hw=yes 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:31.673 
13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:31.673 13:22:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:31.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:31.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:11:31.673 00:11:31.673 --- 10.0.0.2 ping statistics --- 00:11:31.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.673 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:31.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:31.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:11:31.673 00:11:31.673 --- 10.0.0.1 ping statistics --- 00:11:31.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:31.673 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # return 0 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # nvmfpid=1724178 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # waitforlisten 
1724178 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1724178 ']' 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.673 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:31.673 [2024-10-07 13:22:13.132904] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:11:31.673 [2024-10-07 13:22:13.133011] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.673 [2024-10-07 13:22:13.199117] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.673 [2024-10-07 13:22:13.307263] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.674 [2024-10-07 13:22:13.307332] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:31.674 [2024-10-07 13:22:13.307360] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.674 [2024-10-07 13:22:13.307371] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.674 [2024-10-07 13:22:13.307380] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:31.674 [2024-10-07 13:22:13.308008] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:31.933 [2024-10-07 13:22:13.453846] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:31.933 Malloc0 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:31.933 [2024-10-07 13:22:13.510820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.933 13:22:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1724205 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1724205 /var/tmp/bdevperf.sock 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 1724205 ']' 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:31.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:31.933 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:31.933 [2024-10-07 13:22:13.557591] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:11:31.933 [2024-10-07 13:22:13.557680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1724205 ] 00:11:31.933 [2024-10-07 13:22:13.612242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.194 [2024-10-07 13:22:13.722574] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.194 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:32.194 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:11:32.194 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:32.194 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.194 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:32.454 NVMe0n1 00:11:32.454 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.454 13:22:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:32.454 Running I/O for 10 seconds... 
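The per-second samples and summary that follow report throughput both as IOPS and MiB/s; the conversion is simply IOPS × io_size / 2^20, which can be checked against the figures printed in this run's results JSON (8773.81 IOPS at the 4096-byte I/O size passed via `-o`):

```shell
# Check the bdevperf summary arithmetic: MiB/s = IOPS * io_size / 2^20.
# Both inputs below are the values printed in this run's results JSON.
awk 'BEGIN {
  iops    = 8773.809630108051   # "iops" field
  io_size = 4096                # "-o 4096" bdevperf argument
  printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
}'
# → 34.27 MiB/s, matching the "mibps" field in the summary
```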
00:11:34.774 8192.00 IOPS, 32.00 MiB/s [2024-10-07T11:22:17.425Z] 8499.00 IOPS, 33.20 MiB/s [2024-10-07T11:22:18.377Z] 8535.33 IOPS, 33.34 MiB/s [2024-10-07T11:22:19.312Z] 8683.00 IOPS, 33.92 MiB/s [2024-10-07T11:22:20.250Z] 8656.40 IOPS, 33.81 MiB/s [2024-10-07T11:22:21.190Z] 8695.67 IOPS, 33.97 MiB/s [2024-10-07T11:22:22.131Z] 8725.86 IOPS, 34.09 MiB/s [2024-10-07T11:22:23.511Z] 8705.38 IOPS, 34.01 MiB/s [2024-10-07T11:22:24.450Z] 8743.00 IOPS, 34.15 MiB/s [2024-10-07T11:22:24.450Z] 8742.60 IOPS, 34.15 MiB/s 00:11:42.738 Latency(us) 00:11:42.738 [2024-10-07T11:22:24.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.738 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:42.738 Verification LBA range: start 0x0 length 0x4000 00:11:42.738 NVMe0n1 : 10.08 8773.81 34.27 0.00 0.00 116137.30 21165.70 69128.34 00:11:42.738 [2024-10-07T11:22:24.450Z] =================================================================================================================== 00:11:42.738 [2024-10-07T11:22:24.450Z] Total : 8773.81 34.27 0.00 0.00 116137.30 21165.70 69128.34 00:11:42.738 { 00:11:42.738 "results": [ 00:11:42.738 { 00:11:42.738 "job": "NVMe0n1", 00:11:42.738 "core_mask": "0x1", 00:11:42.738 "workload": "verify", 00:11:42.738 "status": "finished", 00:11:42.738 "verify_range": { 00:11:42.738 "start": 0, 00:11:42.738 "length": 16384 00:11:42.738 }, 00:11:42.738 "queue_depth": 1024, 00:11:42.738 "io_size": 4096, 00:11:42.738 "runtime": 10.07943, 00:11:42.738 "iops": 8773.809630108051, 00:11:42.738 "mibps": 34.272693867609576, 00:11:42.738 "io_failed": 0, 00:11:42.738 "io_timeout": 0, 00:11:42.738 "avg_latency_us": 116137.30144912458, 00:11:42.738 "min_latency_us": 21165.70074074074, 00:11:42.738 "max_latency_us": 69128.34370370371 00:11:42.738 } 00:11:42.738 ], 00:11:42.738 "core_count": 1 00:11:42.738 } 00:11:42.738 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 1724205 00:11:42.738 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1724205 ']' 00:11:42.738 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1724205 00:11:42.738 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:42.738 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.738 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1724205 00:11:42.738 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:42.738 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:42.738 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1724205' 00:11:42.738 killing process with pid 1724205 00:11:42.738 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1724205 00:11:42.738 Received shutdown signal, test time was about 10.000000 seconds 00:11:42.738 00:11:42.738 Latency(us) 00:11:42.738 [2024-10-07T11:22:24.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.738 [2024-10-07T11:22:24.450Z] =================================================================================================================== 00:11:42.738 [2024-10-07T11:22:24.450Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:42.738 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1724205 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.999 rmmod nvme_tcp 00:11:42.999 rmmod nvme_fabrics 00:11:42.999 rmmod nvme_keyring 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@515 -- # '[' -n 1724178 ']' 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # killprocess 1724178 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 1724178 ']' 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 1724178 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1724178 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1724178' 00:11:42.999 killing process with pid 1724178 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 1724178 00:11:42.999 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 1724178 00:11:43.259 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:43.259 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:43.259 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:43.259 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:43.259 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-save 00:11:43.259 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:43.259 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@789 -- # iptables-restore 00:11:43.259 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.259 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.259 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.259 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.259 13:22:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.800 13:22:26 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:45.800 00:11:45.800 real 0m16.318s 00:11:45.800 user 0m22.871s 00:11:45.800 sys 0m3.136s 00:11:45.800 13:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:45.800 13:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:45.800 ************************************ 00:11:45.800 END TEST nvmf_queue_depth 00:11:45.800 ************************************ 00:11:45.800 13:22:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:45.800 13:22:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:45.800 13:22:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:45.800 13:22:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:45.800 ************************************ 00:11:45.800 START TEST nvmf_target_multipath 00:11:45.800 ************************************ 00:11:45.800 13:22:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:45.800 * Looking for test storage... 
00:11:45.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:45.800 13:22:27 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
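The scripts/common.sh trace above is the `lt 1.15 2` check deciding whether the installed lcov predates 2.x: both version strings are split on `.`, `-`, or `:` and compared numerically field by field, with missing fields treated as 0. A simplified, numeric-fields-only sketch of that comparison (the real cmp_versions also handles `>`, `<=`, `>=` and equality):

```shell
# Simplified sketch of the cmp_versions "<" path from scripts/common.sh:
# split both versions on '.', '-' or ':' and compare field by field.
version_lt() {
  local IFS='.-:'
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < n; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal is not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 < 2: yes"
```

With the older lcov detected, the test script falls back to the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling seen in the exported LCOV_OPTS below.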
00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:45.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.800 --rc genhtml_branch_coverage=1 00:11:45.800 --rc genhtml_function_coverage=1 00:11:45.800 --rc genhtml_legend=1 00:11:45.800 --rc geninfo_all_blocks=1 00:11:45.800 --rc geninfo_unexecuted_blocks=1 00:11:45.800 00:11:45.800 ' 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:45.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.800 --rc genhtml_branch_coverage=1 00:11:45.800 --rc genhtml_function_coverage=1 00:11:45.800 --rc genhtml_legend=1 00:11:45.800 --rc geninfo_all_blocks=1 00:11:45.800 --rc geninfo_unexecuted_blocks=1 00:11:45.800 00:11:45.800 ' 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:45.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.800 --rc genhtml_branch_coverage=1 00:11:45.800 --rc genhtml_function_coverage=1 00:11:45.800 --rc genhtml_legend=1 00:11:45.800 --rc geninfo_all_blocks=1 00:11:45.800 --rc geninfo_unexecuted_blocks=1 00:11:45.800 00:11:45.800 ' 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:45.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.800 --rc genhtml_branch_coverage=1 00:11:45.800 --rc genhtml_function_coverage=1 00:11:45.800 --rc genhtml_legend=1 00:11:45.800 --rc geninfo_all_blocks=1 00:11:45.800 --rc geninfo_unexecuted_blocks=1 00:11:45.800 00:11:45.800 ' 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.800 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.801 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
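The `[: : integer expression expected` message in the trace above is `[` being handed an empty string where it expects an integer (`'[' '' -eq 1 ']'` at `nvmf/common.sh@33`). A minimal sketch of the failure mode and the usual guard; the variable name here is illustrative, not SPDK's:

```shell
#!/usr/bin/env bash
# An empty value fed to an arithmetic test: [ errors out on stderr and
# returns status 2, so the branch is simply not taken.
NO_HUGE_MEM=""
if [ "$NO_HUGE_MEM" -eq 1 ] 2>/dev/null; then   # error suppressed here
    state="disabled"
fi

# Guarding with a default means the comparison always sees an integer:
if [ "${NO_HUGE_MEM:-0}" -eq 1 ]; then
    state="disabled"
else
    state="enabled"
fi
```

Because the failed test only yields a non-zero status, the script above keeps running; that is why the harness logs the message and continues.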
MALLOC_BDEV_SIZE=64 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.801 13:22:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:11:47.707 Found 0000:09:00.0 (0x8086 - 0x1592) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:11:47.707 Found 0000:09:00.1 (0x8086 - 0x1592) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:47.707 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:47.708 Found net devices under 0000:09:00.0: cvl_0_0 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:47.708 13:22:29 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:47.708 Found net devices under 0000:09:00.1: cvl_0_1 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # is_hw=yes 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:47.708 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.708 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:11:47.708 00:11:47.708 --- 10.0.0.2 ping statistics --- 00:11:47.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.708 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.708 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:47.708 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:11:47.708 00:11:47.708 --- 10.0.0.1 ping statistics --- 00:11:47.708 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.708 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # return 0 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:47.708 only one NIC for nvmf test 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:47.708 13:22:29 
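The `nvmftestinit` steps traced above (netns creation through the two pings) amount to the following topology: the target-side port is moved into its own namespace so target and initiator traffic traverse the wire. A sketch of those commands, wrapped in a function so it can be read without being run; it needs root, and the `cvl_0_*` names are specific to this host's ice ports:

```shell
#!/usr/bin/env bash
# Mirror of the nvmf_tcp_init sequence shown in the log (not the SPDK
# function itself). Defining only; calling it requires root privileges.
setup_nvmf_tcp_netns() {
    ip netns add cvl_0_0_ns_spdk                  # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator IP (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port, tagging the rule so teardown can find it:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                            # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> host
}
```

After this, the harness prefixes every target invocation with `ip netns exec cvl_0_0_ns_spdk`, which is what the `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" ...)` line records.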
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:47.708 rmmod nvme_tcp 00:11:47.708 rmmod nvme_fabrics 00:11:47.708 rmmod nvme_keyring 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.708 13:22:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # nvmfcleanup 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # '[' 
'' == iso ']' 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-save 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:11:50.369 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@789 -- # iptables-restore 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:50.370 00:11:50.370 real 0m4.463s 00:11:50.370 user 0m0.862s 00:11:50.370 sys 0m1.615s 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:50.370 ************************************ 00:11:50.370 END TEST nvmf_target_multipath 00:11:50.370 ************************************ 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core 
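The `iptr` cleanup traced above relies on the `SPDK_NVMF` comment tag added at insert time: teardown dumps the whole ruleset, drops only tagged lines, and restores the rest (`iptables-save | grep -v SPDK_NVMF | iptables-restore`). The filtering half can be sketched without root against a simulated `iptables-save` dump:

```shell
#!/usr/bin/env bash
# Simulated iptables-save output: one rule tagged by the harness, one
# pre-existing rule that must survive the cleanup.
saved='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -m comment --comment "SPDK_NVMF:tagged by harness" -j ACCEPT
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# Drop only the tagged rule; in the real teardown this stream is piped
# straight into iptables-restore.
filtered=$(printf '%s\n' "$saved" | grep -v SPDK_NVMF)
```

Tagging at insert time makes teardown idempotent: re-running it after a crashed test removes exactly the harness's rules and nothing else.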
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:50.370 ************************************ 00:11:50.370 START TEST nvmf_zcopy 00:11:50.370 ************************************ 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:50.370 * Looking for test storage... 00:11:50.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:50.370 13:22:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:50.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.370 --rc genhtml_branch_coverage=1 00:11:50.370 --rc genhtml_function_coverage=1 00:11:50.370 --rc genhtml_legend=1 00:11:50.370 --rc geninfo_all_blocks=1 00:11:50.370 --rc geninfo_unexecuted_blocks=1 00:11:50.370 00:11:50.370 ' 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:50.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.370 --rc genhtml_branch_coverage=1 00:11:50.370 --rc genhtml_function_coverage=1 00:11:50.370 --rc genhtml_legend=1 00:11:50.370 --rc geninfo_all_blocks=1 00:11:50.370 --rc geninfo_unexecuted_blocks=1 00:11:50.370 00:11:50.370 ' 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:50.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.370 --rc genhtml_branch_coverage=1 00:11:50.370 --rc genhtml_function_coverage=1 00:11:50.370 --rc genhtml_legend=1 00:11:50.370 --rc geninfo_all_blocks=1 00:11:50.370 --rc geninfo_unexecuted_blocks=1 00:11:50.370 00:11:50.370 ' 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:50.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:50.370 --rc genhtml_branch_coverage=1 00:11:50.370 --rc 
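The `lt 1.15 2` trace above shows `scripts/common.sh` splitting both version strings on `.`, `-`, and `:` with `IFS` plus `read -ra`, then comparing component by component, padding the shorter array with zeros. A simplified re-implementation of that idea (not the SPDK function itself):

```shell
#!/usr/bin/env bash
# Return 0 (true) when version $1 sorts strictly before version $2.
version_lt() {
    local IFS='.-:'               # split on the same separators as the trace
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local n i x y
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0}              # missing components compare as 0
        y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                      # equal versions are not "less than"
}
```

So `version_lt 1.15 2` succeeds (1 < 2 in the first component), which is why the harness takes the pre-2.0 `lcov` option set.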
genhtml_function_coverage=1 00:11:50.370 --rc genhtml_legend=1 00:11:50.370 --rc geninfo_all_blocks=1 00:11:50.370 --rc geninfo_unexecuted_blocks=1 00:11:50.370 00:11:50.370 ' 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.370 13:22:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.370 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:50.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:11:50.371 13:22:31 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # prepare_net_devs 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # local -g is_hw=no 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # remove_spdk_ns 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:11:50.371 13:22:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.278 13:22:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:11:52.278 Found 0000:09:00.0 (0x8086 - 0x1592) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:11:52.278 Found 0000:09:00.1 (0x8086 - 0x1592) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:52.278 Found net devices under 0000:09:00.0: cvl_0_0 00:11:52.278 13:22:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ up == up ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:52.278 Found net devices under 0000:09:00.1: cvl_0_1 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # is_hw=yes 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.278 13:22:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.278 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:11:52.279 00:11:52.279 --- 10.0.0.2 ping statistics --- 00:11:52.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.279 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:52.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:11:52.279 00:11:52.279 --- 10.0.0.1 ping statistics --- 00:11:52.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.279 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # return 0 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # nvmfpid=1729167 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # waitforlisten 1729167 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 1729167 ']' 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:52.279 13:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.279 [2024-10-07 13:22:33.862496] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:11:52.279 [2024-10-07 13:22:33.862564] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.279 [2024-10-07 13:22:33.922822] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.537 [2024-10-07 13:22:34.030189] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.537 [2024-10-07 13:22:34.030275] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:52.537 [2024-10-07 13:22:34.030288] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.537 [2024-10-07 13:22:34.030299] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.537 [2024-10-07 13:22:34.030308] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.537 [2024-10-07 13:22:34.030870] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.537 [2024-10-07 13:22:34.178917] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.537 [2024-10-07 13:22:34.195175] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.537 malloc0 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:11:52.537 { 00:11:52.537 "params": { 00:11:52.537 "name": "Nvme$subsystem", 00:11:52.537 "trtype": "$TEST_TRANSPORT", 00:11:52.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:52.537 "adrfam": "ipv4", 00:11:52.537 "trsvcid": "$NVMF_PORT", 00:11:52.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:52.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:52.537 "hdgst": ${hdgst:-false}, 00:11:52.537 "ddgst": ${ddgst:-false} 00:11:52.537 }, 00:11:52.537 "method": "bdev_nvme_attach_controller" 00:11:52.537 } 00:11:52.537 EOF 00:11:52.537 )") 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:11:52.537 13:22:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:11:52.537 "params": { 00:11:52.537 "name": "Nvme1", 00:11:52.537 "trtype": "tcp", 00:11:52.537 "traddr": "10.0.0.2", 00:11:52.537 "adrfam": "ipv4", 00:11:52.537 "trsvcid": "4420", 00:11:52.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:52.537 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:52.537 "hdgst": false, 00:11:52.537 "ddgst": false 00:11:52.537 }, 00:11:52.537 "method": "bdev_nvme_attach_controller" 00:11:52.537 }' 00:11:52.797 [2024-10-07 13:22:34.287881] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:11:52.797 [2024-10-07 13:22:34.287964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1729297 ] 00:11:52.797 [2024-10-07 13:22:34.343068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.797 [2024-10-07 13:22:34.452345] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.054 Running I/O for 10 seconds... 
00:11:55.365 5790.00 IOPS, 45.23 MiB/s [2024-10-07T11:22:38.012Z] 5839.00 IOPS, 45.62 MiB/s [2024-10-07T11:22:38.951Z] 5868.33 IOPS, 45.85 MiB/s [2024-10-07T11:22:39.891Z] 5867.75 IOPS, 45.84 MiB/s [2024-10-07T11:22:40.828Z] 5879.40 IOPS, 45.93 MiB/s [2024-10-07T11:22:41.768Z] 5876.83 IOPS, 45.91 MiB/s [2024-10-07T11:22:42.706Z] 5885.43 IOPS, 45.98 MiB/s [2024-10-07T11:22:44.084Z] 5884.75 IOPS, 45.97 MiB/s [2024-10-07T11:22:45.021Z] 5885.89 IOPS, 45.98 MiB/s [2024-10-07T11:22:45.021Z] 5884.40 IOPS, 45.97 MiB/s 00:12:03.309 Latency(us) 00:12:03.309 [2024-10-07T11:22:45.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.309 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:03.309 Verification LBA range: start 0x0 length 0x1000 00:12:03.309 Nvme1n1 : 10.01 5889.23 46.01 0.00 0.00 21678.40 2888.44 29515.47 00:12:03.309 [2024-10-07T11:22:45.022Z] =================================================================================================================== 00:12:03.310 [2024-10-07T11:22:45.022Z] Total : 5889.23 46.01 0.00 0.00 21678.40 2888.44 29515.47 00:12:03.310 13:22:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1730448 00:12:03.310 13:22:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:03.310 13:22:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.310 13:22:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:03.310 13:22:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:03.310 13:22:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # config=() 00:12:03.310 13:22:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # local subsystem config 00:12:03.310 13:22:44 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:12:03.310 13:22:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:12:03.310 { 00:12:03.310 "params": { 00:12:03.310 "name": "Nvme$subsystem", 00:12:03.310 "trtype": "$TEST_TRANSPORT", 00:12:03.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:03.310 "adrfam": "ipv4", 00:12:03.310 "trsvcid": "$NVMF_PORT", 00:12:03.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:03.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:03.310 "hdgst": ${hdgst:-false}, 00:12:03.310 "ddgst": ${ddgst:-false} 00:12:03.310 }, 00:12:03.310 "method": "bdev_nvme_attach_controller" 00:12:03.310 } 00:12:03.310 EOF 00:12:03.310 )") 00:12:03.310 13:22:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # cat 00:12:03.310 [2024-10-07 13:22:44.949180] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.310 [2024-10-07 13:22:44.949221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.310 13:22:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # jq . 
00:12:03.310 13:22:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@583 -- # IFS=, 00:12:03.310 13:22:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:12:03.310 "params": { 00:12:03.310 "name": "Nvme1", 00:12:03.310 "trtype": "tcp", 00:12:03.310 "traddr": "10.0.0.2", 00:12:03.310 "adrfam": "ipv4", 00:12:03.310 "trsvcid": "4420", 00:12:03.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:03.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:03.310 "hdgst": false, 00:12:03.310 "ddgst": false 00:12:03.310 }, 00:12:03.310 "method": "bdev_nvme_attach_controller" 00:12:03.310 }' 00:12:03.310 [2024-10-07 13:22:44.957145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.310 [2024-10-07 13:22:44.957168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.310 [2024-10-07 13:22:44.965162] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.310 [2024-10-07 13:22:44.965183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.310 [2024-10-07 13:22:44.973184] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.310 [2024-10-07 13:22:44.973204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.310 [2024-10-07 13:22:44.981210] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.310 [2024-10-07 13:22:44.981231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.310 [2024-10-07 13:22:44.988503] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:12:03.310 [2024-10-07 13:22:44.988562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1730448 ] 00:12:03.310 [2024-10-07 13:22:44.989229] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.310 [2024-10-07 13:22:44.989249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.310 [2024-10-07 13:22:44.997253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.310 [2024-10-07 13:22:44.997274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.310 [2024-10-07 13:22:45.005270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.310 [2024-10-07 13:22:45.005291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.310 [2024-10-07 13:22:45.013293] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.310 [2024-10-07 13:22:45.013313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.310 [2024-10-07 13:22:45.021315] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.310 [2024-10-07 13:22:45.021350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.029340] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.029361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.037358] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.037379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:12:03.568 [2024-10-07 13:22:45.045379] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.045401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.046694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.568 [2024-10-07 13:22:45.053421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.053454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.061469] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.061506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.069449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.069472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.077466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.077488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.085486] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.085508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.093507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.093528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.101529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.101550] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.109580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.109612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.117584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.117612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.125592] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.125613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.133614] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.133635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.141639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.141684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.149681] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.568 [2024-10-07 13:22:45.149705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.568 [2024-10-07 13:22:45.157702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 13:22:45.157734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.158373] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:03.569 [2024-10-07 13:22:45.165743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:12:03.569 [2024-10-07 13:22:45.165766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.173768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 13:22:45.173797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.181793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 13:22:45.181831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.189822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 13:22:45.189860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.197840] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 13:22:45.197879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.205859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 13:22:45.205897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.213884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 13:22:45.213922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.221871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 13:22:45.221895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.229923] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 
13:22:45.229975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.237956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 13:22:45.237994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.245969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 13:22:45.246000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.253969] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 13:22:45.253991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.261989] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 13:22:45.262023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.270344] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 13:22:45.270371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.569 [2024-10-07 13:22:45.278367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.569 [2024-10-07 13:22:45.278392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.828 [2024-10-07 13:22:45.286397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.828 [2024-10-07 13:22:45.286420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.828 [2024-10-07 13:22:45.294421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.828 [2024-10-07 13:22:45.294444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:12:03.828 [2024-10-07 13:22:45.302445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.828 [2024-10-07 13:22:45.302467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.828 [2024-10-07 13:22:45.310470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.828 [2024-10-07 13:22:45.310493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.828 [2024-10-07 13:22:45.318488] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.828 [2024-10-07 13:22:45.318509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.828 [2024-10-07 13:22:45.326510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.828 [2024-10-07 13:22:45.326531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.828 [2024-10-07 13:22:45.334533] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.828 [2024-10-07 13:22:45.334553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.828 [2024-10-07 13:22:45.342558] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.828 [2024-10-07 13:22:45.342579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.828 [2024-10-07 13:22:45.350582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.828 [2024-10-07 13:22:45.350606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.828 [2024-10-07 13:22:45.358601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.828 [2024-10-07 13:22:45.358624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.828 
[2024-10-07 13:22:45.366624] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.828 [2024-10-07 13:22:45.366644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.828 [2024-10-07 13:22:45.374661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.828 [2024-10-07 13:22:45.374689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.828 [2024-10-07 13:22:45.382692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.382726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.390741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.390765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.398748] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.398770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.406768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.406789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.414778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.414799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.422805] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.422827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.430850] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.430874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.438851] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.438872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.446946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.446973] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.454901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.454927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 Running I/O for 5 seconds... 00:12:03.829 [2024-10-07 13:22:45.462920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.462958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.477319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.477348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.487889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.487919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.498567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.498597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.509616] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.509660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.522757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.522794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:03.829 [2024-10-07 13:22:45.532893] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:03.829 [2024-10-07 13:22:45.532922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.544089] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.544119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.554877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.554906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.565609] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.565651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.578579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.578607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.590330] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.590374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.600325] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.600352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.610947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.610976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.621697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.621726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.634791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.634821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.645227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.645255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.656185] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.656213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.669023] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.669051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.678846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.678874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.689692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 
[2024-10-07 13:22:45.689721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.702249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.702277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.712058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.712087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.723426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.723454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.735787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.735828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.746342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.746370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.756737] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.756767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.767053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.767083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.777761] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.777801] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.788603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.788632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.088 [2024-10-07 13:22:45.799319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.088 [2024-10-07 13:22:45.799348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.809785] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.809814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.820081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.820109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.830991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.831020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.841598] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.841627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.852497] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.852527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.863342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.863371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:04.348 [2024-10-07 13:22:45.876588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.876617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.886901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.886929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.897584] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.897612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.910604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.910633] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.921070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.921099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.931857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.931886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.944616] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.944651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.954702] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.954731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.965175] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.965203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.975721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.975750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.986714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.986743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:45.997907] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:45.997936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:46.008653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:46.008706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:46.019744] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:46.019772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:46.032277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:46.032307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:46.043554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:04.348 [2024-10-07 13:22:46.043583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:04.348 [2024-10-07 13:22:46.052579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use
00:12:04.348 [2024-10-07 13:22:46.052607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.064462] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.064492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.077101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.077129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.087097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.087125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.097470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.097498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.108105] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.108134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.120721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.120749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.130705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.130734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.141271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.141313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.153956] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.153998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.163836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.163864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.174593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.174634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.187182] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.187211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.197444] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.197472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.208321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.208349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.218901] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.218930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.229690] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.229722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.242540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.242568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.252640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.252693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.263537] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.263565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.276251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.276279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.287732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.607 [2024-10-07 13:22:46.287761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.607 [2024-10-07 13:22:46.297151] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.608 [2024-10-07 13:22:46.297179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.608 [2024-10-07 13:22:46.309217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.608 [2024-10-07 13:22:46.309246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.608 [2024-10-07 13:22:46.319839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.608 [2024-10-07 13:22:46.319869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.330440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.330469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.340769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.340799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.351606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.351634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.362195] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.362224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.373256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.373284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.385934] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.385963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.395983] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.396026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.406811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.406840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.417335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.417363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.427839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.427869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.438625] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.438654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.449394] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.449422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.463106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.463134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 11776.00 IOPS, 92.00 MiB/s [2024-10-07T11:22:46.579Z] [2024-10-07 13:22:46.473720] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.473753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.484510] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.484538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.497380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.497408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.507550] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.507577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.518290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.518318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.529511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.529539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.540467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.540495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.552970] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.553013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.562883] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.562911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:04.867 [2024-10-07 13:22:46.573405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:04.867 [2024-10-07 13:22:46.573433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.585958] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.586003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.595795] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.595824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.606603] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.606632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.617433] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.617461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.630270] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.630297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.640820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.640849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.651308] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.651337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.662278] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.662305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.672892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.672922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.685639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.685674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.695656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.695710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.706561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.706589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.719204] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.719232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.729230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.729258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.740334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.740364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.751093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.751121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.762145] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.762175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.772913] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.772952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.783513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.783541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.795936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.795965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.807929] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.807972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.817552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.817579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.828738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.828766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.127 [2024-10-07 13:22:46.839708] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.127 [2024-10-07 13:22:46.839737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:46.850084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:46.850114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:46.862656] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:46.862708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:46.872835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:46.872863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:46.883788] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:46.883817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:46.896779] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:46.896807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:46.907412] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:46.907441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:46.918168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:46.918197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:46.928899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:46.928928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:46.939523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:46.939552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:46.951753] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:46.951782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:46.961458] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:46.961486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:46.972383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:46.972411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:46.982857] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:46.982895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:46.993411] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:46.993440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:47.003952] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:47.003995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:47.014772] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:47.014800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:47.028043] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:47.028071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:47.039640] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:47.039677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:47.049206] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:47.049235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:47.059899] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:47.059927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:47.072685] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:47.072713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:47.082859] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:47.082888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.387 [2024-10-07 13:22:47.093629] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.387 [2024-10-07 13:22:47.093658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.105756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.105785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.115544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.115573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.126216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.126244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.136282] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.136311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.146854] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.146884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.157675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.157703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.168392] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.168420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.180846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.180875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.192473] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.192512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.201524] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.201553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.213149] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.213178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.225928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.225957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.236375] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.236404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.247139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.247166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.257368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.257396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.267915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.267944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.278633] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.278660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.289421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.289448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.300003] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.300031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.310878] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.310906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.321709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.321738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.646 [2024-10-07 13:22:47.332717] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.646 [2024-10-07 13:22:47.332747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.647 [2024-10-07 13:22:47.343663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.647 [2024-10-07 13:22:47.343715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.647 [2024-10-07 13:22:47.356413] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.647 [2024-10-07 13:22:47.356441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.366000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.366045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.377596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.377623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.390060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.390088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.399276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.399316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.410457] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.410486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.420808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.420837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.431242] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.431270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.441915] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.441955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.452787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.452818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.463294] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.463322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 11823.00 IOPS, 92.37 MiB/s [2024-10-07T11:22:47.619Z] [2024-10-07 13:22:47.476100] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.476128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.486407] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.486435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.497430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.497458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.508267] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.508295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.519106] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.519135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.529797] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.529826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.541013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.541041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.551804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.551834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.562594] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.562622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.573158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.573186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.584507] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.584535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.595576] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.595604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.606793] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.606822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:05.907 [2024-10-07 13:22:47.617583] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:05.907 [2024-10-07 13:22:47.617611] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.167 [2024-10-07 13:22:47.628742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.167 [2024-10-07 13:22:47.628771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.167 [2024-10-07 13:22:47.639399] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.167 [2024-10-07 13:22:47.639427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.167 [2024-10-07 13:22:47.652252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.167 [2024-10-07 13:22:47.652280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.167 [2024-10-07 13:22:47.662567] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.167 [2024-10-07 13:22:47.662595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.167 [2024-10-07 13:22:47.673347] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.167 [2024-10-07 13:22:47.673375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.167 [2024-10-07 13:22:47.686191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.167 [2024-10-07 13:22:47.686220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.167 [2024-10-07 13:22:47.696307] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.167 [2024-10-07 13:22:47.696334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.167 [2024-10-07 13:22:47.707172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.167 [2024-10-07 13:22:47.707200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.167 [2024-10-07 13:22:47.717863] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.167 [2024-10-07 13:22:47.717892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.168 [2024-10-07 13:22:47.728694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.168 [2024-10-07 13:22:47.728723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.168 [2024-10-07 13:22:47.741403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.168 [2024-10-07 13:22:47.741432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.168 [2024-10-07 13:22:47.753169] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.168 [2024-10-07 13:22:47.753198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.168 [2024-10-07 13:22:47.762575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.168 [2024-10-07 13:22:47.762604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.168 [2024-10-07 13:22:47.774084] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.168 [2024-10-07 13:22:47.774112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.168 [2024-10-07 13:22:47.784684] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.168 [2024-10-07 13:22:47.784723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.168 [2024-10-07 13:22:47.794818] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.168 [2024-10-07 13:22:47.794848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.168 [2024-10-07 13:22:47.805471] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.168 [2024-10-07 13:22:47.805499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.168 [2024-10-07 13:22:47.816305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.168 [2024-10-07 13:22:47.816333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.168 [2024-10-07 13:22:47.828942] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.168 [2024-10-07 13:22:47.828985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.168 [2024-10-07 13:22:47.838870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.168 [2024-10-07 13:22:47.838898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:06.168 [2024-10-07 13:22:47.849814] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:06.168 [2024-10-07 13:22:47.849843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:12:06.168 [2024-10-07 13:22:47.861035] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.168 [2024-10-07 13:22:47.861064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.168 [2024-10-07 13:22:47.872082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.168 [2024-10-07 13:22:47.872110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:47.885312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:47.885340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:47.895792] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:47.895820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:47.906086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:47.906113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:47.916579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:47.916606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:47.927762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:47.927790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:47.938762] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:47.938790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:47.950024] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:47.950050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:47.960981] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:47.961024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:47.972033] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:47.972061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:47.982743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:47.982773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:47.995117] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:47.995145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:48.004554] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:48.004582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:48.015428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:48.015464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:48.028252] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:48.028279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:48.038908] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:48.038936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:48.049170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:48.049198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:48.060005] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.427 [2024-10-07 13:22:48.060032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.427 [2024-10-07 13:22:48.072423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.428 [2024-10-07 13:22:48.072451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.428 [2024-10-07 13:22:48.082775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.428 [2024-10-07 13:22:48.082805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.428 [2024-10-07 13:22:48.094322] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.428 [2024-10-07 13:22:48.094351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.428 [2024-10-07 13:22:48.105246] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.428 [2024-10-07 13:22:48.105273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.428 [2024-10-07 13:22:48.116474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.428 [2024-10-07 13:22:48.116501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.428 [2024-10-07 13:22:48.127312] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.428 
[2024-10-07 13:22:48.127340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.428 [2024-10-07 13:22:48.137920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.428 [2024-10-07 13:22:48.137948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.687 [2024-10-07 13:22:48.148920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.687 [2024-10-07 13:22:48.148964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.687 [2024-10-07 13:22:48.159992] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.687 [2024-10-07 13:22:48.160035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.687 [2024-10-07 13:22:48.170769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.687 [2024-10-07 13:22:48.170797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.687 [2024-10-07 13:22:48.181446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.687 [2024-10-07 13:22:48.181474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.687 [2024-10-07 13:22:48.191940] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.191983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.202894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.202923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.215572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.215599] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.227337] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.227375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.236398] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.236426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.248133] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.248161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.260299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.260329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.270637] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.270682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.281115] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.281143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.291647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.291682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.302170] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.302198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:06.688 [2024-10-07 13:22:48.312912] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.312939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.325980] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.326009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.336164] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.336193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.346651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.346689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.357570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.357598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.370046] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.370074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.380439] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.380467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.688 [2024-10-07 13:22:48.391479] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.688 [2024-10-07 13:22:48.391508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.402424] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.402453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.412715] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.412743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.423034] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.423062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.433819] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.433876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.444783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.444811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.457111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.457140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.466662] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.466713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 11818.67 IOPS, 92.33 MiB/s [2024-10-07T11:22:48.660Z] [2024-10-07 13:22:48.477639] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.477692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.488378] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.488405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.499534] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.499562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.512440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.512468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.522650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.522693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.533426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.533456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.546110] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.546139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.556163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.556192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.567150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.567194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.580445] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.580473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.590939] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.590982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.601721] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.601750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.614405] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.614434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.623680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.623721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.635024] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.635053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.645712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.645741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:06.948 [2024-10-07 13:22:48.656738] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:06.948 [2024-10-07 13:22:48.656767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.208 [2024-10-07 13:22:48.667482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.208 
[2024-10-07 13:22:48.667527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.678470] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.678498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.690741] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.690770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.700932] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.700961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.712094] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.712122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.724766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.724795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.735000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.735028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.745713] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.745758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.758802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.758831] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.769262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.769290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.780492] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.780520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.793849] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.793878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.803985] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.804014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.815030] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.815059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.826032] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.826060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.836959] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.836987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.849452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.849480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:07.209 [2024-10-07 13:22:48.859552] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.859580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.870791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.870820] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.883751] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.883780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.894219] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.894246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.904789] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.904817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.209 [2024-10-07 13:22:48.915644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.209 [2024-10-07 13:22:48.915681] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:48.926591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:48.926619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:48.937060] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:48.937088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:48.947990] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:48.948018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:48.958833] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:48.958862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:48.971900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:48.971929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:48.982247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:48.982275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:48.992986] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:48.993013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.003559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.003587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.014453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.014481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.028249] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.028276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.038966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.038994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.049595] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.049623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.060348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.060376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.071698] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.071725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.082191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.082218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.093037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.093065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.103620] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.103648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.114306] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.114334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.125319] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 
[2024-10-07 13:22:49.125349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.136054] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.136081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.148512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.148539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.158198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.158226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.469 [2024-10-07 13:22:49.169947] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.469 [2024-10-07 13:22:49.169977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.182716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.182745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.193102] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.193130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.204261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.204288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.217188] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.217216] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.227451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.227479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.238222] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.238249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.251025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.251053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.261244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.261273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.272215] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.272243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.283196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.283224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.294007] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.294035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.306703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.306732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:07.729 [2024-10-07 13:22:49.317139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.317167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.327694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.327723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.338328] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.338356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.729 [2024-10-07 13:22:49.348827] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.729 [2024-10-07 13:22:49.348856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.730 [2024-10-07 13:22:49.359543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.730 [2024-10-07 13:22:49.359571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.730 [2024-10-07 13:22:49.370221] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.730 [2024-10-07 13:22:49.370249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.730 [2024-10-07 13:22:49.381226] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.730 [2024-10-07 13:22:49.381254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.730 [2024-10-07 13:22:49.392070] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.730 [2024-10-07 13:22:49.392098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.730 [2024-10-07 13:22:49.404355] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.730 [2024-10-07 13:22:49.404383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.730 [2024-10-07 13:22:49.414074] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.730 [2024-10-07 13:22:49.414102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.730 [2024-10-07 13:22:49.424714] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.730 [2024-10-07 13:22:49.424743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.730 [2024-10-07 13:22:49.437606] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.730 [2024-10-07 13:22:49.437634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.447885] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.447914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.458661] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.458699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.471820] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.471848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 11803.25 IOPS, 92.21 MiB/s [2024-10-07T11:22:49.700Z] [2024-10-07 13:22:49.482049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.482086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.492768] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.492806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.503331] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.503360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.514228] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.514272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.524838] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.524866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.535771] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.535799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.546216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.546246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.558645] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.558682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.568649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.568686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.579314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.579341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.590016] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.590046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.600705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.600734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.611187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.611217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.621920] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.621949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.635190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.635220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.645756] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.645786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.656087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.656116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.666647] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 
[2024-10-07 13:22:49.666684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.677573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.677602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.690766] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.690802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:07.988 [2024-10-07 13:22:49.700941] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:07.988 [2024-10-07 13:22:49.700970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.247 [2024-10-07 13:22:49.711589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.247 [2024-10-07 13:22:49.711634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.247 [2024-10-07 13:22:49.722842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.247 [2024-10-07 13:22:49.722871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.247 [2024-10-07 13:22:49.733817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.247 [2024-10-07 13:22:49.733846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.247 [2024-10-07 13:22:49.744700] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.247 [2024-10-07 13:22:49.744731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.247 [2024-10-07 13:22:49.755768] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.247 [2024-10-07 13:22:49.755796] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.247 [2024-10-07 13:22:49.766493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.247 [2024-10-07 13:22:49.766521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.247 [2024-10-07 13:22:49.777573] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.777601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.790244] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.790271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.800158] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.800187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.810968] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.810996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.823890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.823930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.834283] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.834311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.844870] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.844898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:08.248 [2024-10-07 13:22:49.856013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.856041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.866858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.866887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.880230] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.880257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.890543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.890571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.901048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.901085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.911865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.911892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.922001] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.922029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.932511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.932539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.943374] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.943402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.248 [2024-10-07 13:22:49.953758] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.248 [2024-10-07 13:22:49.953797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:49.964490] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:49.964519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:49.975746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:49.975775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:49.986787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:49.986816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:49.997694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:49.997724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.010202] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.010232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.019974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.020004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.030493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.030522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.041570] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.041598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.053997] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.054025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.064248] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.064275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.075285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.075313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.086135] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.086163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.096943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.096971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.109750] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.109778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.121711] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 
[2024-10-07 13:22:50.121740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.130868] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.130897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.143626] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.143652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.153545] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.153573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.164468] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.164495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.182103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.182132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.192455] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.192482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.203027] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.203054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.508 [2024-10-07 13:22:50.213826] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.508 [2024-10-07 13:22:50.213854] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.768 [2024-10-07 13:22:50.224796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.768 [2024-10-07 13:22:50.224825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.768 [2024-10-07 13:22:50.237561] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.768 [2024-10-07 13:22:50.237589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.768 [2024-10-07 13:22:50.247808] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.768 [2024-10-07 13:22:50.247836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.768 [2024-10-07 13:22:50.258604] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.768 [2024-10-07 13:22:50.258631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.768 [2024-10-07 13:22:50.269357] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.768 [2024-10-07 13:22:50.269385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.768 [2024-10-07 13:22:50.280142] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.768 [2024-10-07 13:22:50.280170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.768 [2024-10-07 13:22:50.292752] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.768 [2024-10-07 13:22:50.292780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.768 [2024-10-07 13:22:50.302716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.768 [2024-10-07 13:22:50.302744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:12:08.768 [2024-10-07 13:22:50.313103] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.768 [2024-10-07 13:22:50.313132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.768 [2024-10-07 13:22:50.323696] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.768 [2024-10-07 13:22:50.323725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.768 [2024-10-07 13:22:50.334834] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.768 [2024-10-07 13:22:50.334862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.768 [2024-10-07 13:22:50.347521] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.768 [2024-10-07 13:22:50.347549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.769 [2024-10-07 13:22:50.357987] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.769 [2024-10-07 13:22:50.358030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.769 [2024-10-07 13:22:50.369271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.769 [2024-10-07 13:22:50.369299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.769 [2024-10-07 13:22:50.381705] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.769 [2024-10-07 13:22:50.381734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.769 [2024-10-07 13:22:50.391423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:08.769 [2024-10-07 13:22:50.391450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:08.769 [2024-10-07 13:22:50.403056] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:08.769 [2024-10-07 13:22:50.403085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:08.769 [2024-10-07 13:22:50.415622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:08.769 [2024-10-07 13:22:50.415677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:08.769 [2024-10-07 13:22:50.425875] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:08.769 [2024-10-07 13:22:50.425904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:08.769 [2024-10-07 13:22:50.437037] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:08.769 [2024-10-07 13:22:50.437078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:08.769 [2024-10-07 13:22:50.449778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:08.769 [2024-10-07 13:22:50.449807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:08.769 [2024-10-07 13:22:50.460058] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:08.769 [2024-10-07 13:22:50.460085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:08.769 [2024-10-07 13:22:50.470572] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:08.769 [2024-10-07 13:22:50.470615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:08.769 11801.00 IOPS, 92.20 MiB/s [2024-10-07T11:22:50.481Z] [2024-10-07 13:22:50.479380] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:08.769 [2024-10-07 13:22:50.479421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:09.027
00:12:09.027 Latency(us)
00:12:09.027 [2024-10-07T11:22:50.739Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:09.027 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:09.027 Nvme1n1 : 5.01 11809.64 92.26 0.00 0.00 10826.55 4636.07 22913.33
00:12:09.027 [2024-10-07T11:22:50.739Z] ===================================================================================================================
00:12:09.027 [2024-10-07T11:22:50.739Z] Total : 11809.64 92.26 0.00 0.00 10826.55 4636.07 22913.33
00:12:09.027 [2024-10-07 13:22:50.485191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:09.027 [2024-10-07 13:22:50.485239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:09.027 [2024-10-07 13:22:50.493196] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:09.027 [2024-10-07 13:22:50.493220] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:09.027 [2024-10-07 13:22:50.501211] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:09.027 [2024-10-07 13:22:50.501232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:09.027 [2024-10-07 13:22:50.509304] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:09.027 [2024-10-07 13:22:50.509357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:09.027 [2024-10-07 13:22:50.517321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:09.027 [2024-10-07 13:22:50.517377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:09.027 [2024-10-07 13:22:50.525334] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:09.027 [2024-10-07 13:22:50.525386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:12:09.027 [2024-10-07 13:22:50.533356] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.533406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.541373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.541424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.549409] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.549466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.557420] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.557472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.565440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.565492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.573466] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.573520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.581493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.581549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.589513] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.589567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.597537] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.597593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.605553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.605605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.613575] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.613629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.621601] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.621656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.629559] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.629580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.637580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.637615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.645605] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.645626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.653622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.653658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.661695] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.661736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.669725] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.669774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.677757] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.677805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.685746] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.685768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.693745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.693767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.701769] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.701790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.709791] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.709813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.717872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.717926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.725892] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 
[2024-10-07 13:22:50.725941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.027 [2024-10-07 13:22:50.733869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.027 [2024-10-07 13:22:50.733894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.287 [2024-10-07 13:22:50.741877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.287 [2024-10-07 13:22:50.741899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.287 [2024-10-07 13:22:50.749900] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:09.287 [2024-10-07 13:22:50.749923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:09.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1730448) - No such process 00:12:09.287 13:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1730448 00:12:09.287 13:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.287 13:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.287 13:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:09.287 13:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.287 13:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:09.287 13:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.287 13:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:09.287 delay0 00:12:09.287 13:22:50 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.287 13:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:09.287 13:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.287 13:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:09.287 13:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.287 13:22:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:09.287 [2024-10-07 13:22:50.828205] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:15.858 Initializing NVMe Controllers 00:12:15.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:15.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:15.858 Initialization complete. Launching workers. 
00:12:15.858 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 69 00:12:15.858 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 356, failed to submit 33 00:12:15.858 success 151, unsuccessful 205, failed 0 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:15.858 rmmod nvme_tcp 00:12:15.858 rmmod nvme_fabrics 00:12:15.858 rmmod nvme_keyring 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@515 -- # '[' -n 1729167 ']' 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # killprocess 1729167 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 1729167 ']' 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 1729167 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1729167 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1729167' 00:12:15.858 killing process with pid 1729167 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 1729167 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 1729167 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-save 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@789 -- # iptables-restore 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.858 13:22:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.778 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:17.778 00:12:17.778 real 0m27.978s 00:12:17.778 user 0m41.710s 00:12:17.778 sys 0m7.958s 00:12:17.778 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:17.778 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:17.778 ************************************ 00:12:17.778 END TEST nvmf_zcopy 00:12:17.778 ************************************ 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:18.037 ************************************ 00:12:18.037 START TEST nvmf_nmic 00:12:18.037 ************************************ 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:18.037 * Looking for test storage... 
00:12:18.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:18.037 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.038 13:22:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:18.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.038 --rc genhtml_branch_coverage=1 00:12:18.038 --rc genhtml_function_coverage=1 00:12:18.038 --rc genhtml_legend=1 00:12:18.038 --rc geninfo_all_blocks=1 00:12:18.038 --rc geninfo_unexecuted_blocks=1 
00:12:18.038 00:12:18.038 ' 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:18.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.038 --rc genhtml_branch_coverage=1 00:12:18.038 --rc genhtml_function_coverage=1 00:12:18.038 --rc genhtml_legend=1 00:12:18.038 --rc geninfo_all_blocks=1 00:12:18.038 --rc geninfo_unexecuted_blocks=1 00:12:18.038 00:12:18.038 ' 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:18.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.038 --rc genhtml_branch_coverage=1 00:12:18.038 --rc genhtml_function_coverage=1 00:12:18.038 --rc genhtml_legend=1 00:12:18.038 --rc geninfo_all_blocks=1 00:12:18.038 --rc geninfo_unexecuted_blocks=1 00:12:18.038 00:12:18.038 ' 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:18.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.038 --rc genhtml_branch_coverage=1 00:12:18.038 --rc genhtml_function_coverage=1 00:12:18.038 --rc genhtml_legend=1 00:12:18.038 --rc geninfo_all_blocks=1 00:12:18.038 --rc geninfo_unexecuted_blocks=1 00:12:18.038 00:12:18.038 ' 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.038 13:22:59 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:18.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:18.038 
13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:12:18.038 13:22:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.584 13:23:01 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:12:20.584 Found 0000:09:00.0 (0x8086 - 0x1592) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:12:20.584 Found 0000:09:00.1 (0x8086 - 0x1592) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 
00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:20.584 Found net devices under 0000:09:00.0: cvl_0_0 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:20.584 Found net devices under 0000:09:00.1: cvl_0_1 00:12:20.584 
13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # is_hw=yes 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.584 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:12:20.585 00:12:20.585 --- 10.0.0.2 ping statistics --- 00:12:20.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.585 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:20.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:12:20.585 00:12:20.585 --- 10.0.0.1 ping statistics --- 00:12:20.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.585 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # return 0 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # nvmfpid=1733681 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # waitforlisten 1733681 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 1733681 ']' 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:20.585 13:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.585 [2024-10-07 13:23:01.945999] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:12:20.585 [2024-10-07 13:23:01.946096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.585 [2024-10-07 13:23:02.005391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.585 [2024-10-07 13:23:02.121913] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:20.585 [2024-10-07 13:23:02.121993] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:20.585 [2024-10-07 13:23:02.122006] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.585 [2024-10-07 13:23:02.122041] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.585 [2024-10-07 13:23:02.122050] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.585 [2024-10-07 13:23:02.123597] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.585 [2024-10-07 13:23:02.123698] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.585 [2024-10-07 13:23:02.123740] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.585 [2024-10-07 13:23:02.123743] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.585 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:20.585 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:12:20.585 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:20.585 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:20.585 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.585 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.585 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:20.585 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.585 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.585 [2024-10-07 13:23:02.275402] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.585 
13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.585 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:20.585 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.585 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.846 Malloc0 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.846 [2024-10-07 13:23:02.328527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:20.846 test case1: single bdev can't be used in multiple subsystems 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.846 [2024-10-07 13:23:02.352346] bdev.c:8202:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:20.846 [2024-10-07 
13:23:02.352375] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:20.846 [2024-10-07 13:23:02.352389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.846 request: 00:12:20.846 { 00:12:20.846 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:20.846 "namespace": { 00:12:20.846 "bdev_name": "Malloc0", 00:12:20.846 "no_auto_visible": false 00:12:20.846 }, 00:12:20.846 "method": "nvmf_subsystem_add_ns", 00:12:20.846 "req_id": 1 00:12:20.846 } 00:12:20.846 Got JSON-RPC error response 00:12:20.846 response: 00:12:20.846 { 00:12:20.846 "code": -32602, 00:12:20.846 "message": "Invalid parameters" 00:12:20.846 } 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:20.846 Adding namespace failed - expected result. 
00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:20.846 test case2: host connect to nvmf target in multiple paths 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:20.846 [2024-10-07 13:23:02.364471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.846 13:23:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.415 13:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:22.351 13:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:22.351 13:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:12:22.351 13:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.351 13:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:22.351 13:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 
00:12:24.255 13:23:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:24.255 13:23:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:24.255 13:23:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:24.255 13:23:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:24.255 13:23:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:24.255 13:23:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:12:24.255 13:23:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:24.255 [global] 00:12:24.255 thread=1 00:12:24.255 invalidate=1 00:12:24.255 rw=write 00:12:24.255 time_based=1 00:12:24.255 runtime=1 00:12:24.255 ioengine=libaio 00:12:24.255 direct=1 00:12:24.255 bs=4096 00:12:24.255 iodepth=1 00:12:24.255 norandommap=0 00:12:24.255 numjobs=1 00:12:24.255 00:12:24.255 verify_dump=1 00:12:24.255 verify_backlog=512 00:12:24.255 verify_state_save=0 00:12:24.255 do_verify=1 00:12:24.255 verify=crc32c-intel 00:12:24.255 [job0] 00:12:24.255 filename=/dev/nvme0n1 00:12:24.255 Could not set queue depth (nvme0n1) 00:12:24.255 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:24.255 fio-3.35 00:12:24.255 Starting 1 thread 00:12:25.633 00:12:25.633 job0: (groupid=0, jobs=1): err= 0: pid=1734296: Mon Oct 7 13:23:07 2024 00:12:25.633 read: IOPS=1437, BW=5750KiB/s (5888kB/s)(5888KiB/1024msec) 00:12:25.633 slat (nsec): min=6230, max=58985, avg=12740.84, stdev=6739.45 00:12:25.633 clat (usec): min=174, max=41091, avg=496.78, stdev=3346.53 00:12:25.633 lat (usec): min=181, max=41109, 
avg=509.52, stdev=3347.45 00:12:25.633 clat percentiles (usec): 00:12:25.633 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:12:25.633 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:12:25.633 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 258], 95.00th=[ 297], 00:12:25.633 | 99.00th=[ 498], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:25.633 | 99.99th=[41157] 00:12:25.633 write: IOPS=1500, BW=6000KiB/s (6144kB/s)(6144KiB/1024msec); 0 zone resets 00:12:25.633 slat (nsec): min=5935, max=39440, avg=14874.39, stdev=4239.94 00:12:25.633 clat (usec): min=115, max=281, avg=155.08, stdev=21.40 00:12:25.633 lat (usec): min=122, max=297, avg=169.95, stdev=22.30 00:12:25.633 clat percentiles (usec): 00:12:25.633 | 1.00th=[ 125], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 139], 00:12:25.633 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 153], 00:12:25.633 | 70.00th=[ 161], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 192], 00:12:25.633 | 99.00th=[ 239], 99.50th=[ 255], 99.90th=[ 260], 99.95th=[ 281], 00:12:25.633 | 99.99th=[ 281] 00:12:25.633 bw ( KiB/s): min= 792, max=11496, per=100.00%, avg=6144.00, stdev=7568.87, samples=2 00:12:25.633 iops : min= 198, max= 2874, avg=1536.00, stdev=1892.22, samples=2 00:12:25.633 lat (usec) : 250=93.55%, 500=5.98%, 750=0.13% 00:12:25.633 lat (msec) : 50=0.33% 00:12:25.633 cpu : usr=2.15%, sys=4.30%, ctx=3008, majf=0, minf=1 00:12:25.633 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:25.633 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.633 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:25.633 issued rwts: total=1472,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:25.633 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:25.633 00:12:25.633 Run status group 0 (all jobs): 00:12:25.633 READ: bw=5750KiB/s (5888kB/s), 5750KiB/s-5750KiB/s (5888kB/s-5888kB/s), io=5888KiB (6029kB), 
run=1024-1024msec 00:12:25.633 WRITE: bw=6000KiB/s (6144kB/s), 6000KiB/s-6000KiB/s (6144kB/s-6144kB/s), io=6144KiB (6291kB), run=1024-1024msec 00:12:25.633 00:12:25.633 Disk stats (read/write): 00:12:25.633 nvme0n1: ios=1518/1536, merge=0/0, ticks=597/219, in_queue=816, util=91.48% 00:12:25.633 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:25.633 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.633 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:12:25.633 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:25.633 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.633 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:25.633 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.633 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:12:25.633 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- 
# for i in {1..20} 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:25.634 rmmod nvme_tcp 00:12:25.634 rmmod nvme_fabrics 00:12:25.634 rmmod nvme_keyring 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@515 -- # '[' -n 1733681 ']' 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # killprocess 1733681 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 1733681 ']' 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 1733681 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1733681 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1733681' 00:12:25.634 killing process with pid 1733681 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 1733681 00:12:25.634 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 1733681 00:12:26.204 13:23:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:26.204 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:26.204 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:26.204 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:26.204 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-save 00:12:26.204 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:26.204 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@789 -- # iptables-restore 00:12:26.204 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:26.204 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:26.204 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.204 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:26.204 13:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:28.179 00:12:28.179 real 0m10.161s 00:12:28.179 user 0m22.432s 00:12:28.179 sys 0m2.572s 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:28.179 ************************************ 00:12:28.179 END TEST nvmf_nmic 00:12:28.179 ************************************ 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:28.179 ************************************ 00:12:28.179 START TEST nvmf_fio_target 00:12:28.179 ************************************ 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:28.179 * Looking for test storage... 00:12:28.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:28.179 13:23:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:28.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.179 --rc genhtml_branch_coverage=1 00:12:28.179 --rc genhtml_function_coverage=1 00:12:28.179 --rc genhtml_legend=1 00:12:28.179 --rc geninfo_all_blocks=1 00:12:28.179 --rc geninfo_unexecuted_blocks=1 00:12:28.179 00:12:28.179 ' 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:28.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.179 --rc genhtml_branch_coverage=1 00:12:28.179 --rc genhtml_function_coverage=1 00:12:28.179 --rc genhtml_legend=1 00:12:28.179 --rc geninfo_all_blocks=1 00:12:28.179 --rc geninfo_unexecuted_blocks=1 00:12:28.179 00:12:28.179 ' 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:28.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.179 --rc genhtml_branch_coverage=1 00:12:28.179 --rc genhtml_function_coverage=1 00:12:28.179 --rc genhtml_legend=1 00:12:28.179 --rc geninfo_all_blocks=1 00:12:28.179 --rc geninfo_unexecuted_blocks=1 00:12:28.179 00:12:28.179 ' 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 
00:12:28.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.179 --rc genhtml_branch_coverage=1 00:12:28.179 --rc genhtml_function_coverage=1 00:12:28.179 --rc genhtml_legend=1 00:12:28.179 --rc geninfo_all_blocks=1 00:12:28.179 --rc geninfo_unexecuted_blocks=1 00:12:28.179 00:12:28.179 ' 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.179 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # 
NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:28.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:28.440 13:23:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.343 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.343 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:30.343 13:23:11 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:12:30.344 Found 0000:09:00.0 (0x8086 - 0x1592) 00:12:30.344 13:23:11 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:12:30.344 Found 0000:09:00.1 (0x8086 - 0x1592) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:30.344 Found net devices under 0000:09:00.0: cvl_0_0 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:30.344 Found net devices under 0000:09:00.1: cvl_0_1 
00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # is_hw=yes 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.344 13:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.344 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:30.344 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.344 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.344 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.344 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:30.344 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:30.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:30.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:12:30.344 00:12:30.344 --- 10.0.0.2 ping statistics --- 00:12:30.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.344 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:12:30.344 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:30.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:12:30.603 00:12:30.603 --- 10.0.0.1 ping statistics --- 00:12:30.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.603 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # return 0 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 
00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # nvmfpid=1736283 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # waitforlisten 1736283 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 1736283 ']' 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:30.603 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.603 [2024-10-07 13:23:12.138071] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:12:30.603 [2024-10-07 13:23:12.138156] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.603 [2024-10-07 13:23:12.200478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.603 [2024-10-07 13:23:12.308990] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.603 [2024-10-07 13:23:12.309061] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.603 [2024-10-07 13:23:12.309073] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.603 [2024-10-07 13:23:12.309084] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.603 [2024-10-07 13:23:12.309099] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:30.603 [2024-10-07 13:23:12.310713] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.603 [2024-10-07 13:23:12.310772] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.603 [2024-10-07 13:23:12.310738] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.603 [2024-10-07 13:23:12.310776] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.861 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:30.861 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:12:30.861 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:12:30.861 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:30.861 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.861 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.861 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:31.119 [2024-10-07 13:23:12.743674] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.119 13:23:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:31.376 13:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:31.376 13:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:31.943 13:23:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:31.943 13:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:32.203 13:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:32.203 13:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:32.491 13:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:32.491 13:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:32.750 13:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:33.008 13:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:33.008 13:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:33.266 13:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:33.266 13:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:33.524 13:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:33.524 13:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:12:33.782 13:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:34.040 13:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:34.040 13:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:34.298 13:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:34.298 13:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:34.556 13:23:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.813 [2024-10-07 13:23:16.448949] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.813 13:23:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:35.070 13:23:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:35.329 13:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:12:35.896 13:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:35.896 13:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:12:35.896 13:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.896 13:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:12:35.896 13:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:12:35.896 13:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:12:38.441 13:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:38.441 13:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:38.441 13:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.441 13:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:12:38.441 13:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.441 13:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:12:38.441 13:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:38.441 [global] 00:12:38.441 thread=1 00:12:38.441 invalidate=1 00:12:38.441 rw=write 00:12:38.441 time_based=1 00:12:38.441 runtime=1 00:12:38.441 ioengine=libaio 00:12:38.441 direct=1 00:12:38.441 bs=4096 00:12:38.441 iodepth=1 00:12:38.441 norandommap=0 00:12:38.441 numjobs=1 00:12:38.441 00:12:38.441 
verify_dump=1 00:12:38.441 verify_backlog=512 00:12:38.441 verify_state_save=0 00:12:38.441 do_verify=1 00:12:38.441 verify=crc32c-intel 00:12:38.441 [job0] 00:12:38.441 filename=/dev/nvme0n1 00:12:38.441 [job1] 00:12:38.441 filename=/dev/nvme0n2 00:12:38.441 [job2] 00:12:38.441 filename=/dev/nvme0n3 00:12:38.441 [job3] 00:12:38.441 filename=/dev/nvme0n4 00:12:38.441 Could not set queue depth (nvme0n1) 00:12:38.441 Could not set queue depth (nvme0n2) 00:12:38.441 Could not set queue depth (nvme0n3) 00:12:38.441 Could not set queue depth (nvme0n4) 00:12:38.441 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:38.441 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:38.441 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:38.441 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:38.441 fio-3.35 00:12:38.441 Starting 4 threads 00:12:39.378 00:12:39.378 job0: (groupid=0, jobs=1): err= 0: pid=1737328: Mon Oct 7 13:23:21 2024 00:12:39.378 read: IOPS=1777, BW=7111KiB/s (7281kB/s)(7260KiB/1021msec) 00:12:39.378 slat (nsec): min=4933, max=68314, avg=15748.16, stdev=8214.40 00:12:39.378 clat (usec): min=185, max=40670, avg=306.55, stdev=953.07 00:12:39.378 lat (usec): min=191, max=40686, avg=322.30, stdev=953.49 00:12:39.378 clat percentiles (usec): 00:12:39.378 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 219], 00:12:39.378 | 30.00th=[ 231], 40.00th=[ 247], 50.00th=[ 260], 60.00th=[ 273], 00:12:39.378 | 70.00th=[ 302], 80.00th=[ 334], 90.00th=[ 367], 95.00th=[ 490], 00:12:39.378 | 99.00th=[ 586], 99.50th=[ 594], 99.90th=[ 1893], 99.95th=[40633], 00:12:39.378 | 99.99th=[40633] 00:12:39.378 write: IOPS=2005, BW=8024KiB/s (8216kB/s)(8192KiB/1021msec); 0 zone resets 00:12:39.378 slat (nsec): min=6775, max=66336, avg=14026.49, 
stdev=8013.62 00:12:39.378 clat (usec): min=132, max=1447, avg=190.87, stdev=51.28 00:12:39.378 lat (usec): min=140, max=1471, avg=204.90, stdev=54.75 00:12:39.378 clat percentiles (usec): 00:12:39.378 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 151], 20.00th=[ 157], 00:12:39.378 | 30.00th=[ 163], 40.00th=[ 172], 50.00th=[ 184], 60.00th=[ 198], 00:12:39.378 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 239], 95.00th=[ 251], 00:12:39.378 | 99.00th=[ 293], 99.50th=[ 334], 99.90th=[ 392], 99.95th=[ 1270], 00:12:39.378 | 99.99th=[ 1450] 00:12:39.378 bw ( KiB/s): min= 8192, max= 8192, per=40.72%, avg=8192.00, stdev= 0.00, samples=2 00:12:39.378 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:12:39.378 lat (usec) : 250=70.59%, 500=27.21%, 750=2.07% 00:12:39.378 lat (msec) : 2=0.10%, 50=0.03% 00:12:39.378 cpu : usr=3.53%, sys=7.16%, ctx=3865, majf=0, minf=1 00:12:39.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:39.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.378 issued rwts: total=1815,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:39.378 job1: (groupid=0, jobs=1): err= 0: pid=1737329: Mon Oct 7 13:23:21 2024 00:12:39.378 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:39.378 slat (nsec): min=5776, max=60050, avg=13305.30, stdev=6087.90 00:12:39.378 clat (usec): min=177, max=41477, avg=268.07, stdev=913.51 00:12:39.378 lat (usec): min=185, max=41485, avg=281.37, stdev=913.49 00:12:39.378 clat percentiles (usec): 00:12:39.378 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:12:39.378 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 243], 00:12:39.378 | 70.00th=[ 253], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 326], 00:12:39.378 | 99.00th=[ 510], 99.50th=[ 586], 99.90th=[ 988], 
99.95th=[ 1336], 00:12:39.378 | 99.99th=[41681] 00:12:39.378 write: IOPS=2080, BW=8324KiB/s (8523kB/s)(8332KiB/1001msec); 0 zone resets 00:12:39.378 slat (nsec): min=6905, max=36210, avg=12895.99, stdev=5101.76 00:12:39.378 clat (usec): min=128, max=1336, avg=183.02, stdev=52.16 00:12:39.378 lat (usec): min=136, max=1348, avg=195.92, stdev=53.11 00:12:39.378 clat percentiles (usec): 00:12:39.378 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 153], 00:12:39.378 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 180], 00:12:39.378 | 70.00th=[ 190], 80.00th=[ 200], 90.00th=[ 227], 95.00th=[ 273], 00:12:39.378 | 99.00th=[ 371], 99.50th=[ 396], 99.90th=[ 461], 99.95th=[ 930], 00:12:39.378 | 99.99th=[ 1336] 00:12:39.378 bw ( KiB/s): min= 8192, max= 8192, per=40.72%, avg=8192.00, stdev= 0.00, samples=1 00:12:39.378 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:39.378 lat (usec) : 250=80.39%, 500=18.98%, 750=0.41%, 1000=0.15% 00:12:39.378 lat (msec) : 2=0.05%, 50=0.02% 00:12:39.378 cpu : usr=2.70%, sys=6.00%, ctx=4132, majf=0, minf=1 00:12:39.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:39.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.379 issued rwts: total=2048,2083,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:39.379 job2: (groupid=0, jobs=1): err= 0: pid=1737330: Mon Oct 7 13:23:21 2024 00:12:39.379 read: IOPS=148, BW=593KiB/s (607kB/s)(608KiB/1025msec) 00:12:39.379 slat (nsec): min=4697, max=34070, avg=11667.31, stdev=6425.02 00:12:39.379 clat (usec): min=206, max=41982, avg=5883.23, stdev=14138.15 00:12:39.379 lat (usec): min=216, max=42014, avg=5894.89, stdev=14142.17 00:12:39.379 clat percentiles (usec): 00:12:39.379 | 1.00th=[ 210], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225], 00:12:39.379 | 
30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 247], 00:12:39.379 | 70.00th=[ 251], 80.00th=[ 269], 90.00th=[41157], 95.00th=[41157], 00:12:39.379 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:39.379 | 99.99th=[42206] 00:12:39.379 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:12:39.379 slat (nsec): min=6314, max=35332, avg=10801.21, stdev=5277.48 00:12:39.379 clat (usec): min=148, max=1834, avg=236.09, stdev=129.16 00:12:39.379 lat (usec): min=155, max=1849, avg=246.89, stdev=131.14 00:12:39.379 clat percentiles (usec): 00:12:39.379 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 165], 00:12:39.379 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 198], 00:12:39.379 | 70.00th=[ 273], 80.00th=[ 330], 90.00th=[ 351], 95.00th=[ 371], 00:12:39.379 | 99.00th=[ 424], 99.50th=[ 1287], 99.90th=[ 1827], 99.95th=[ 1827], 00:12:39.379 | 99.99th=[ 1827] 00:12:39.379 bw ( KiB/s): min= 4096, max= 4096, per=20.36%, avg=4096.00, stdev= 0.00, samples=1 00:12:39.379 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:39.379 lat (usec) : 250=66.42%, 500=29.82%, 750=0.15% 00:12:39.379 lat (msec) : 2=0.45%, 50=3.16% 00:12:39.379 cpu : usr=0.20%, sys=0.78%, ctx=664, majf=0, minf=2 00:12:39.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:39.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.379 issued rwts: total=152,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:39.379 job3: (groupid=0, jobs=1): err= 0: pid=1737331: Mon Oct 7 13:23:21 2024 00:12:39.379 read: IOPS=126, BW=507KiB/s (520kB/s)(508KiB/1001msec) 00:12:39.379 slat (nsec): min=7618, max=39243, avg=13327.73, stdev=7855.67 00:12:39.379 clat (usec): min=229, max=42052, avg=6546.24, stdev=14819.60 
00:12:39.379 lat (usec): min=245, max=42070, avg=6559.56, stdev=14823.07 00:12:39.379 clat percentiles (usec): 00:12:39.379 | 1.00th=[ 243], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 285], 00:12:39.379 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 302], 00:12:39.379 | 70.00th=[ 334], 80.00th=[ 510], 90.00th=[41681], 95.00th=[42206], 00:12:39.379 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:39.379 | 99.99th=[42206] 00:12:39.379 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:39.379 slat (usec): min=7, max=22062, avg=57.63, stdev=974.43 00:12:39.379 clat (usec): min=168, max=1374, avg=264.53, stdev=89.20 00:12:39.379 lat (usec): min=178, max=22432, avg=322.16, stdev=983.09 00:12:39.379 clat percentiles (usec): 00:12:39.379 | 1.00th=[ 172], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 198], 00:12:39.379 | 30.00th=[ 208], 40.00th=[ 219], 50.00th=[ 243], 60.00th=[ 265], 00:12:39.379 | 70.00th=[ 310], 80.00th=[ 334], 90.00th=[ 359], 95.00th=[ 371], 00:12:39.379 | 99.00th=[ 461], 99.50th=[ 676], 99.90th=[ 1369], 99.95th=[ 1369], 00:12:39.379 | 99.99th=[ 1369] 00:12:39.379 bw ( KiB/s): min= 4096, max= 4096, per=20.36%, avg=4096.00, stdev= 0.00, samples=1 00:12:39.379 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:39.379 lat (usec) : 250=43.04%, 500=52.11%, 750=0.94%, 1000=0.63% 00:12:39.379 lat (msec) : 2=0.16%, 4=0.16%, 50=2.97% 00:12:39.379 cpu : usr=0.20%, sys=1.50%, ctx=643, majf=0, minf=1 00:12:39.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:39.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:39.379 issued rwts: total=127,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:39.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:39.379 00:12:39.379 Run status group 0 (all jobs): 00:12:39.379 READ: bw=15.8MiB/s 
(16.6MB/s), 507KiB/s-8184KiB/s (520kB/s-8380kB/s), io=16.2MiB (17.0MB), run=1001-1025msec 00:12:39.379 WRITE: bw=19.6MiB/s (20.6MB/s), 1998KiB/s-8324KiB/s (2046kB/s-8523kB/s), io=20.1MiB (21.1MB), run=1001-1025msec 00:12:39.379 00:12:39.379 Disk stats (read/write): 00:12:39.379 nvme0n1: ios=1585/1665, merge=0/0, ticks=637/314, in_queue=951, util=85.57% 00:12:39.379 nvme0n2: ios=1563/2004, merge=0/0, ticks=1279/357, in_queue=1636, util=89.52% 00:12:39.379 nvme0n3: ios=204/512, merge=0/0, ticks=763/120, in_queue=883, util=94.99% 00:12:39.379 nvme0n4: ios=94/512, merge=0/0, ticks=957/136, in_queue=1093, util=94.21% 00:12:39.379 13:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:39.379 [global] 00:12:39.379 thread=1 00:12:39.379 invalidate=1 00:12:39.379 rw=randwrite 00:12:39.379 time_based=1 00:12:39.379 runtime=1 00:12:39.379 ioengine=libaio 00:12:39.379 direct=1 00:12:39.379 bs=4096 00:12:39.379 iodepth=1 00:12:39.379 norandommap=0 00:12:39.379 numjobs=1 00:12:39.379 00:12:39.379 verify_dump=1 00:12:39.379 verify_backlog=512 00:12:39.379 verify_state_save=0 00:12:39.379 do_verify=1 00:12:39.379 verify=crc32c-intel 00:12:39.379 [job0] 00:12:39.379 filename=/dev/nvme0n1 00:12:39.379 [job1] 00:12:39.379 filename=/dev/nvme0n2 00:12:39.379 [job2] 00:12:39.379 filename=/dev/nvme0n3 00:12:39.637 [job3] 00:12:39.637 filename=/dev/nvme0n4 00:12:39.637 Could not set queue depth (nvme0n1) 00:12:39.637 Could not set queue depth (nvme0n2) 00:12:39.637 Could not set queue depth (nvme0n3) 00:12:39.637 Could not set queue depth (nvme0n4) 00:12:39.637 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:39.637 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:39.637 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:39.637 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:39.637 fio-3.35 00:12:39.637 Starting 4 threads 00:12:41.014 00:12:41.014 job0: (groupid=0, jobs=1): err= 0: pid=1737667: Mon Oct 7 13:23:22 2024 00:12:41.014 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:12:41.014 slat (nsec): min=6149, max=77517, avg=13416.89, stdev=6135.77 00:12:41.014 clat (usec): min=197, max=41442, avg=623.92, stdev=3803.22 00:12:41.014 lat (usec): min=205, max=41449, avg=637.34, stdev=3802.99 00:12:41.014 clat percentiles (usec): 00:12:41.014 | 1.00th=[ 210], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 235], 00:12:41.014 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:12:41.014 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 351], 00:12:41.014 | 99.00th=[ 865], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:12:41.014 | 99.99th=[41681] 00:12:41.014 write: IOPS=1259, BW=5039KiB/s (5160kB/s)(5044KiB/1001msec); 0 zone resets 00:12:41.014 slat (nsec): min=7002, max=69816, avg=22663.82, stdev=8676.58 00:12:41.014 clat (usec): min=143, max=543, avg=242.63, stdev=76.21 00:12:41.014 lat (usec): min=154, max=588, avg=265.29, stdev=78.59 00:12:41.014 clat percentiles (usec): 00:12:41.014 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 182], 00:12:41.014 | 30.00th=[ 190], 40.00th=[ 204], 50.00th=[ 217], 60.00th=[ 233], 00:12:41.014 | 70.00th=[ 262], 80.00th=[ 297], 90.00th=[ 359], 95.00th=[ 412], 00:12:41.014 | 99.00th=[ 461], 99.50th=[ 474], 99.90th=[ 537], 99.95th=[ 545], 00:12:41.014 | 99.99th=[ 545] 00:12:41.014 bw ( KiB/s): min= 5912, max= 5912, per=35.34%, avg=5912.00, stdev= 0.00, samples=1 00:12:41.014 iops : min= 1478, max= 1478, avg=1478.00, stdev= 0.00, samples=1 00:12:41.014 lat (usec) : 250=53.26%, 500=45.95%, 750=0.26%, 1000=0.09% 00:12:41.014 lat (msec) : 2=0.04%, 50=0.39% 00:12:41.014 cpu : 
usr=2.90%, sys=5.80%, ctx=2286, majf=0, minf=2 00:12:41.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:41.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.014 issued rwts: total=1024,1261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.014 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:41.014 job1: (groupid=0, jobs=1): err= 0: pid=1737668: Mon Oct 7 13:23:22 2024 00:12:41.014 read: IOPS=22, BW=89.1KiB/s (91.2kB/s)(92.0KiB/1033msec) 00:12:41.014 slat (nsec): min=7776, max=36305, avg=20836.13, stdev=9625.15 00:12:41.014 clat (usec): min=236, max=42106, avg=39378.27, stdev=8541.50 00:12:41.014 lat (usec): min=244, max=42122, avg=39399.11, stdev=8544.15 00:12:41.014 clat percentiles (usec): 00:12:41.014 | 1.00th=[ 237], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:12:41.014 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:41.014 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:12:41.014 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:41.014 | 99.99th=[42206] 00:12:41.014 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:12:41.014 slat (nsec): min=8550, max=50630, avg=16296.59, stdev=7983.60 00:12:41.014 clat (usec): min=153, max=411, avg=224.90, stdev=29.93 00:12:41.014 lat (usec): min=168, max=449, avg=241.20, stdev=29.85 00:12:41.014 clat percentiles (usec): 00:12:41.014 | 1.00th=[ 176], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 204], 00:12:41.014 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 227], 00:12:41.014 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 255], 95.00th=[ 273], 00:12:41.014 | 99.00th=[ 351], 99.50th=[ 392], 99.90th=[ 412], 99.95th=[ 412], 00:12:41.014 | 99.99th=[ 412] 00:12:41.014 bw ( KiB/s): min= 4096, max= 4096, per=24.48%, avg=4096.00, stdev= 0.00, 
samples=1 00:12:41.014 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:41.014 lat (usec) : 250=82.99%, 500=12.90% 00:12:41.014 lat (msec) : 50=4.11% 00:12:41.014 cpu : usr=0.29%, sys=1.36%, ctx=536, majf=0, minf=1 00:12:41.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:41.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.014 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.015 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:41.015 job2: (groupid=0, jobs=1): err= 0: pid=1737669: Mon Oct 7 13:23:22 2024 00:12:41.015 read: IOPS=31, BW=127KiB/s (130kB/s)(132KiB/1036msec) 00:12:41.015 slat (nsec): min=8718, max=36101, avg=21878.79, stdev=9178.97 00:12:41.015 clat (usec): min=346, max=42061, avg=27578.02, stdev=19506.95 00:12:41.015 lat (usec): min=374, max=42077, avg=27599.90, stdev=19506.99 00:12:41.015 clat percentiles (usec): 00:12:41.015 | 1.00th=[ 347], 5.00th=[ 355], 10.00th=[ 367], 20.00th=[ 412], 00:12:41.015 | 30.00th=[ 523], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:12:41.015 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:12:41.015 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:41.015 | 99.99th=[42206] 00:12:41.015 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:12:41.015 slat (nsec): min=7249, max=52218, avg=14344.03, stdev=7297.29 00:12:41.015 clat (usec): min=146, max=336, avg=224.61, stdev=27.15 00:12:41.015 lat (usec): min=161, max=367, avg=238.96, stdev=25.29 00:12:41.015 clat percentiles (usec): 00:12:41.015 | 1.00th=[ 165], 5.00th=[ 182], 10.00th=[ 196], 20.00th=[ 204], 00:12:41.015 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:12:41.015 | 70.00th=[ 235], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 273], 00:12:41.015 | 
99.00th=[ 314], 99.50th=[ 322], 99.90th=[ 338], 99.95th=[ 338], 00:12:41.015 | 99.99th=[ 338] 00:12:41.015 bw ( KiB/s): min= 4096, max= 4096, per=24.48%, avg=4096.00, stdev= 0.00, samples=1 00:12:41.015 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:41.015 lat (usec) : 250=82.39%, 500=13.21%, 750=0.37% 00:12:41.015 lat (msec) : 50=4.04% 00:12:41.015 cpu : usr=0.58%, sys=0.58%, ctx=545, majf=0, minf=2 00:12:41.015 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:41.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.015 issued rwts: total=33,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.015 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:41.015 job3: (groupid=0, jobs=1): err= 0: pid=1737670: Mon Oct 7 13:23:22 2024 00:12:41.015 read: IOPS=1540, BW=6162KiB/s (6310kB/s)(6168KiB/1001msec) 00:12:41.015 slat (nsec): min=6807, max=58572, avg=13732.33, stdev=6075.54 00:12:41.015 clat (usec): min=186, max=41386, avg=323.02, stdev=1474.51 00:12:41.015 lat (usec): min=194, max=41404, avg=336.76, stdev=1474.81 00:12:41.015 clat percentiles (usec): 00:12:41.015 | 1.00th=[ 198], 5.00th=[ 215], 10.00th=[ 227], 20.00th=[ 239], 00:12:41.015 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:12:41.015 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 306], 95.00th=[ 318], 00:12:41.015 | 99.00th=[ 371], 99.50th=[ 791], 99.90th=[41157], 99.95th=[41157], 00:12:41.015 | 99.99th=[41157] 00:12:41.015 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:41.015 slat (nsec): min=8020, max=65595, avg=17083.68, stdev=9411.19 00:12:41.015 clat (usec): min=142, max=464, avg=209.96, stdev=61.48 00:12:41.015 lat (usec): min=151, max=493, avg=227.04, stdev=66.50 00:12:41.015 clat percentiles (usec): 00:12:41.015 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 
159], 00:12:41.015 | 30.00th=[ 169], 40.00th=[ 180], 50.00th=[ 190], 60.00th=[ 206], 00:12:41.015 | 70.00th=[ 227], 80.00th=[ 245], 90.00th=[ 293], 95.00th=[ 351], 00:12:41.015 | 99.00th=[ 412], 99.50th=[ 424], 99.90th=[ 453], 99.95th=[ 453], 00:12:41.015 | 99.99th=[ 465] 00:12:41.015 bw ( KiB/s): min= 8192, max= 8192, per=48.97%, avg=8192.00, stdev= 0.00, samples=1 00:12:41.015 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:41.015 lat (usec) : 250=58.58%, 500=41.14%, 750=0.03%, 1000=0.19% 00:12:41.015 lat (msec) : 50=0.06% 00:12:41.015 cpu : usr=3.70%, sys=7.80%, ctx=3591, majf=0, minf=1 00:12:41.015 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:41.015 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.015 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:41.015 issued rwts: total=1542,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:41.015 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:41.015 00:12:41.015 Run status group 0 (all jobs): 00:12:41.015 READ: bw=9.89MiB/s (10.4MB/s), 89.1KiB/s-6162KiB/s (91.2kB/s-6310kB/s), io=10.2MiB (10.7MB), run=1001-1036msec 00:12:41.015 WRITE: bw=16.3MiB/s (17.1MB/s), 1977KiB/s-8184KiB/s (2024kB/s-8380kB/s), io=16.9MiB (17.7MB), run=1001-1036msec 00:12:41.015 00:12:41.015 Disk stats (read/write): 00:12:41.015 nvme0n1: ios=1018/1024, merge=0/0, ticks=602/206, in_queue=808, util=87.98% 00:12:41.015 nvme0n2: ios=41/512, merge=0/0, ticks=1607/110, in_queue=1717, util=91.16% 00:12:41.015 nvme0n3: ios=74/512, merge=0/0, ticks=794/114, in_queue=908, util=91.76% 00:12:41.015 nvme0n4: ios=1564/1536, merge=0/0, ticks=721/294, in_queue=1015, util=98.63% 00:12:41.015 13:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:41.015 [global] 00:12:41.015 thread=1 00:12:41.015 
invalidate=1 00:12:41.015 rw=write 00:12:41.015 time_based=1 00:12:41.015 runtime=1 00:12:41.015 ioengine=libaio 00:12:41.015 direct=1 00:12:41.015 bs=4096 00:12:41.015 iodepth=128 00:12:41.015 norandommap=0 00:12:41.015 numjobs=1 00:12:41.015 00:12:41.015 verify_dump=1 00:12:41.015 verify_backlog=512 00:12:41.015 verify_state_save=0 00:12:41.015 do_verify=1 00:12:41.015 verify=crc32c-intel 00:12:41.015 [job0] 00:12:41.015 filename=/dev/nvme0n1 00:12:41.015 [job1] 00:12:41.015 filename=/dev/nvme0n2 00:12:41.015 [job2] 00:12:41.015 filename=/dev/nvme0n3 00:12:41.015 [job3] 00:12:41.015 filename=/dev/nvme0n4 00:12:41.015 Could not set queue depth (nvme0n1) 00:12:41.015 Could not set queue depth (nvme0n2) 00:12:41.015 Could not set queue depth (nvme0n3) 00:12:41.015 Could not set queue depth (nvme0n4) 00:12:41.273 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:41.273 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:41.273 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:41.273 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:41.273 fio-3.35 00:12:41.273 Starting 4 threads 00:12:42.662 00:12:42.662 job0: (groupid=0, jobs=1): err= 0: pid=1737890: Mon Oct 7 13:23:24 2024 00:12:42.662 read: IOPS=3014, BW=11.8MiB/s (12.3MB/s)(12.0MiB/1019msec) 00:12:42.662 slat (usec): min=3, max=13596, avg=113.75, stdev=747.97 00:12:42.662 clat (usec): min=5965, max=29784, avg=14048.75, stdev=4985.58 00:12:42.662 lat (usec): min=5978, max=29803, avg=14162.49, stdev=5048.97 00:12:42.662 clat percentiles (usec): 00:12:42.662 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:12:42.662 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11731], 60.00th=[13566], 00:12:42.662 | 70.00th=[17695], 80.00th=[18744], 90.00th=[20841], 
95.00th=[24249], 00:12:42.662 | 99.00th=[27919], 99.50th=[28443], 99.90th=[29492], 99.95th=[29492], 00:12:42.662 | 99.99th=[29754] 00:12:42.662 write: IOPS=3381, BW=13.2MiB/s (13.9MB/s)(13.5MiB/1019msec); 0 zone resets 00:12:42.662 slat (usec): min=4, max=11208, avg=176.98, stdev=818.09 00:12:42.662 clat (usec): min=1378, max=96789, avg=24939.82, stdev=15652.23 00:12:42.662 lat (usec): min=1389, max=96798, avg=25116.80, stdev=15719.33 00:12:42.662 clat percentiles (usec): 00:12:42.662 | 1.00th=[ 3163], 5.00th=[ 8586], 10.00th=[ 9503], 20.00th=[13566], 00:12:42.662 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20579], 60.00th=[21103], 00:12:42.662 | 70.00th=[24249], 80.00th=[35914], 90.00th=[41681], 95.00th=[56361], 00:12:42.662 | 99.00th=[93848], 99.50th=[95945], 99.90th=[96994], 99.95th=[96994], 00:12:42.662 | 99.99th=[96994] 00:12:42.662 bw ( KiB/s): min=13256, max=13296, per=25.42%, avg=13276.00, stdev=28.28, samples=2 00:12:42.663 iops : min= 3314, max= 3324, avg=3319.00, stdev= 7.07, samples=2 00:12:42.663 lat (msec) : 2=0.31%, 4=0.37%, 10=18.99%, 20=37.54%, 50=39.64% 00:12:42.663 lat (msec) : 100=3.15% 00:12:42.663 cpu : usr=5.11%, sys=7.86%, ctx=400, majf=0, minf=1 00:12:42.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:42.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:42.663 issued rwts: total=3072,3446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.663 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:42.663 job1: (groupid=0, jobs=1): err= 0: pid=1737891: Mon Oct 7 13:23:24 2024 00:12:42.663 read: IOPS=2522, BW=9.85MiB/s (10.3MB/s)(10.0MiB/1015msec) 00:12:42.663 slat (usec): min=3, max=9606, avg=111.57, stdev=635.76 00:12:42.663 clat (usec): min=5225, max=29659, avg=12942.89, stdev=5075.85 00:12:42.663 lat (usec): min=5232, max=29676, avg=13054.46, stdev=5120.48 00:12:42.663 clat 
percentiles (usec): 00:12:42.663 | 1.00th=[ 5538], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[10159], 00:12:42.663 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:12:42.663 | 70.00th=[12911], 80.00th=[15401], 90.00th=[21365], 95.00th=[25297], 00:12:42.663 | 99.00th=[28443], 99.50th=[28967], 99.90th=[29492], 99.95th=[29754], 00:12:42.663 | 99.99th=[29754] 00:12:42.663 write: IOPS=2843, BW=11.1MiB/s (11.6MB/s)(11.3MiB/1015msec); 0 zone resets 00:12:42.663 slat (usec): min=4, max=12432, avg=236.68, stdev=1061.57 00:12:42.663 clat (usec): min=1275, max=112412, avg=33117.58, stdev=24037.88 00:12:42.663 lat (usec): min=1285, max=112435, avg=33354.26, stdev=24152.54 00:12:42.663 clat percentiles (msec): 00:12:42.663 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 11], 20.00th=[ 21], 00:12:42.663 | 30.00th=[ 21], 40.00th=[ 22], 50.00th=[ 22], 60.00th=[ 24], 00:12:42.663 | 70.00th=[ 39], 80.00th=[ 52], 90.00th=[ 60], 95.00th=[ 94], 00:12:42.663 | 99.00th=[ 109], 99.50th=[ 111], 99.90th=[ 113], 99.95th=[ 113], 00:12:42.663 | 99.99th=[ 113] 00:12:42.663 bw ( KiB/s): min= 9808, max=12256, per=21.12%, avg=11032.00, stdev=1731.00, samples=2 00:12:42.663 iops : min= 2452, max= 3064, avg=2758.00, stdev=432.75, samples=2 00:12:42.663 lat (msec) : 2=0.50%, 4=0.26%, 10=13.31%, 20=37.72%, 50=36.74% 00:12:42.663 lat (msec) : 100=9.35%, 250=2.13% 00:12:42.663 cpu : usr=4.24%, sys=6.80%, ctx=380, majf=0, minf=2 00:12:42.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:12:42.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:42.663 issued rwts: total=2560,2886,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.663 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:42.663 job2: (groupid=0, jobs=1): err= 0: pid=1737892: Mon Oct 7 13:23:24 2024 00:12:42.663 read: IOPS=3357, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1007msec) 
00:12:42.663 slat (usec): min=3, max=11274, avg=138.32, stdev=825.33 00:12:42.663 clat (usec): min=4354, max=58525, avg=14617.53, stdev=7697.38 00:12:42.663 lat (usec): min=5348, max=58547, avg=14755.86, stdev=7804.01 00:12:42.663 clat percentiles (usec): 00:12:42.663 | 1.00th=[ 6194], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[11207], 00:12:42.663 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[12125], 00:12:42.663 | 70.00th=[14353], 80.00th=[16581], 90.00th=[21103], 95.00th=[29230], 00:12:42.663 | 99.00th=[53740], 99.50th=[56886], 99.90th=[58459], 99.95th=[58459], 00:12:42.663 | 99.99th=[58459] 00:12:42.663 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:12:42.663 slat (usec): min=5, max=10039, avg=137.53, stdev=649.85 00:12:42.663 clat (usec): min=3154, max=58553, avg=21784.86, stdev=13633.24 00:12:42.663 lat (usec): min=3163, max=58593, avg=21922.39, stdev=13698.25 00:12:42.663 clat percentiles (usec): 00:12:42.663 | 1.00th=[ 4817], 5.00th=[ 8848], 10.00th=[10159], 20.00th=[11338], 00:12:42.663 | 30.00th=[11600], 40.00th=[14877], 50.00th=[20317], 60.00th=[21103], 00:12:42.663 | 70.00th=[21627], 80.00th=[27657], 90.00th=[51119], 95.00th=[53216], 00:12:42.663 | 99.00th=[55313], 99.50th=[55313], 99.90th=[57410], 99.95th=[58459], 00:12:42.663 | 99.99th=[58459] 00:12:42.663 bw ( KiB/s): min=13328, max=15344, per=27.45%, avg=14336.00, stdev=1425.53, samples=2 00:12:42.663 iops : min= 3332, max= 3836, avg=3584.00, stdev=356.38, samples=2 00:12:42.663 lat (msec) : 4=0.17%, 10=6.50%, 20=60.82%, 50=26.33%, 100=6.17% 00:12:42.663 cpu : usr=4.97%, sys=8.55%, ctx=385, majf=0, minf=1 00:12:42.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:42.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.663 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:42.663 issued rwts: total=3381,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.663 latency 
: target=0, window=0, percentile=100.00%, depth=128 00:12:42.663 job3: (groupid=0, jobs=1): err= 0: pid=1737893: Mon Oct 7 13:23:24 2024 00:12:42.663 read: IOPS=3014, BW=11.8MiB/s (12.3MB/s)(12.0MiB/1019msec) 00:12:42.663 slat (usec): min=3, max=18647, avg=177.68, stdev=1103.44 00:12:42.663 clat (msec): min=5, max=110, avg=18.86, stdev=16.38 00:12:42.663 lat (msec): min=5, max=110, avg=19.03, stdev=16.51 00:12:42.663 clat percentiles (msec): 00:12:42.663 | 1.00th=[ 7], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 12], 00:12:42.663 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:12:42.663 | 70.00th=[ 15], 80.00th=[ 18], 90.00th=[ 48], 95.00th=[ 54], 00:12:42.663 | 99.00th=[ 92], 99.50th=[ 103], 99.90th=[ 111], 99.95th=[ 111], 00:12:42.663 | 99.99th=[ 111] 00:12:42.663 write: IOPS=3326, BW=13.0MiB/s (13.6MB/s)(13.2MiB/1019msec); 0 zone resets 00:12:42.663 slat (usec): min=4, max=9438, avg=122.65, stdev=488.16 00:12:42.663 clat (msec): min=3, max=110, avg=20.88, stdev=14.03 00:12:42.663 lat (msec): min=3, max=110, avg=21.01, stdev=14.08 00:12:42.663 clat percentiles (msec): 00:12:42.663 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 10], 20.00th=[ 12], 00:12:42.663 | 30.00th=[ 14], 40.00th=[ 20], 50.00th=[ 21], 60.00th=[ 21], 00:12:42.663 | 70.00th=[ 21], 80.00th=[ 22], 90.00th=[ 35], 95.00th=[ 50], 00:12:42.663 | 99.00th=[ 93], 99.50th=[ 97], 99.90th=[ 99], 99.95th=[ 111], 00:12:42.663 | 99.99th=[ 111] 00:12:42.663 bw ( KiB/s): min=12912, max=13192, per=24.99%, avg=13052.00, stdev=197.99, samples=2 00:12:42.663 iops : min= 3228, max= 3298, avg=3263.00, stdev=49.50, samples=2 00:12:42.663 lat (msec) : 4=0.09%, 10=10.82%, 20=52.24%, 50=30.52%, 100=5.97% 00:12:42.663 lat (msec) : 250=0.36% 00:12:42.663 cpu : usr=4.52%, sys=7.86%, ctx=401, majf=0, minf=1 00:12:42.663 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:42.663 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.663 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:42.663 issued rwts: total=3072,3390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.663 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:42.663 00:12:42.663 Run status group 0 (all jobs): 00:12:42.663 READ: bw=46.3MiB/s (48.6MB/s), 9.85MiB/s-13.1MiB/s (10.3MB/s-13.8MB/s), io=47.2MiB (49.5MB), run=1007-1019msec 00:12:42.663 WRITE: bw=51.0MiB/s (53.5MB/s), 11.1MiB/s-13.9MiB/s (11.6MB/s-14.6MB/s), io=52.0MiB (54.5MB), run=1007-1019msec 00:12:42.663 00:12:42.663 Disk stats (read/write): 00:12:42.663 nvme0n1: ios=2597/2871, merge=0/0, ticks=34198/69789, in_queue=103987, util=97.70% 00:12:42.663 nvme0n2: ios=2048/2271, merge=0/0, ticks=25238/76272, in_queue=101510, util=86.67% 00:12:42.663 nvme0n3: ios=2987/3072, merge=0/0, ticks=41363/63265, in_queue=104628, util=97.60% 00:12:42.663 nvme0n4: ios=2607/2823, merge=0/0, ticks=46330/57558, in_queue=103888, util=97.26% 00:12:42.663 13:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:42.663 [global] 00:12:42.663 thread=1 00:12:42.663 invalidate=1 00:12:42.663 rw=randwrite 00:12:42.663 time_based=1 00:12:42.663 runtime=1 00:12:42.663 ioengine=libaio 00:12:42.663 direct=1 00:12:42.663 bs=4096 00:12:42.663 iodepth=128 00:12:42.663 norandommap=0 00:12:42.663 numjobs=1 00:12:42.663 00:12:42.663 verify_dump=1 00:12:42.663 verify_backlog=512 00:12:42.663 verify_state_save=0 00:12:42.663 do_verify=1 00:12:42.663 verify=crc32c-intel 00:12:42.663 [job0] 00:12:42.663 filename=/dev/nvme0n1 00:12:42.663 [job1] 00:12:42.663 filename=/dev/nvme0n2 00:12:42.663 [job2] 00:12:42.663 filename=/dev/nvme0n3 00:12:42.663 [job3] 00:12:42.663 filename=/dev/nvme0n4 00:12:42.663 Could not set queue depth (nvme0n1) 00:12:42.663 Could not set queue depth (nvme0n2) 00:12:42.663 Could not set queue depth (nvme0n3) 00:12:42.663 Could not set queue depth 
(nvme0n4) 00:12:42.663 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:42.663 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:42.663 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:42.663 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:42.663 fio-3.35 00:12:42.663 Starting 4 threads 00:12:44.043 00:12:44.043 job0: (groupid=0, jobs=1): err= 0: pid=1738116: Mon Oct 7 13:23:25 2024 00:12:44.043 read: IOPS=2724, BW=10.6MiB/s (11.2MB/s)(10.7MiB/1009msec) 00:12:44.043 slat (usec): min=2, max=11155, avg=144.14, stdev=881.57 00:12:44.043 clat (usec): min=2570, max=36338, avg=17683.88, stdev=4718.07 00:12:44.043 lat (usec): min=8876, max=41471, avg=17828.02, stdev=4809.74 00:12:44.043 clat percentiles (usec): 00:12:44.043 | 1.00th=[ 9765], 5.00th=[12518], 10.00th=[12649], 20.00th=[13566], 00:12:44.043 | 30.00th=[14222], 40.00th=[16319], 50.00th=[17171], 60.00th=[17433], 00:12:44.043 | 70.00th=[19530], 80.00th=[21627], 90.00th=[22938], 95.00th=[26346], 00:12:44.043 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:12:44.043 | 99.99th=[36439] 00:12:44.043 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:12:44.043 slat (usec): min=3, max=7737, avg=191.42, stdev=820.32 00:12:44.043 clat (usec): min=4402, max=66987, avg=25657.74, stdev=13184.11 00:12:44.043 lat (usec): min=4413, max=67001, avg=25849.15, stdev=13266.51 00:12:44.043 clat percentiles (usec): 00:12:44.043 | 1.00th=[ 7439], 5.00th=[ 8717], 10.00th=[11207], 20.00th=[12649], 00:12:44.043 | 30.00th=[16188], 40.00th=[19792], 50.00th=[23725], 60.00th=[28443], 00:12:44.043 | 70.00th=[31589], 80.00th=[34866], 90.00th=[43779], 95.00th=[54789], 00:12:44.043 | 99.00th=[60556], 99.50th=[62653], 
99.90th=[66847], 99.95th=[66847], 00:12:44.043 | 99.99th=[66847] 00:12:44.043 bw ( KiB/s): min=10248, max=14299, per=20.16%, avg=12273.50, stdev=2864.49, samples=2 00:12:44.043 iops : min= 2562, max= 3574, avg=3068.00, stdev=715.59, samples=2 00:12:44.043 lat (msec) : 4=0.02%, 10=4.11%, 20=52.45%, 50=40.18%, 100=3.25% 00:12:44.043 cpu : usr=2.38%, sys=4.07%, ctx=323, majf=0, minf=1 00:12:44.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:12:44.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:44.043 issued rwts: total=2749,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:44.043 job1: (groupid=0, jobs=1): err= 0: pid=1738117: Mon Oct 7 13:23:25 2024 00:12:44.043 read: IOPS=4853, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1003msec) 00:12:44.043 slat (usec): min=2, max=16396, avg=94.66, stdev=600.94 00:12:44.043 clat (usec): min=506, max=41248, avg=12310.21, stdev=2985.81 00:12:44.043 lat (usec): min=2376, max=41253, avg=12404.87, stdev=3015.06 00:12:44.043 clat percentiles (usec): 00:12:44.043 | 1.00th=[ 4424], 5.00th=[ 8848], 10.00th=[ 9765], 20.00th=[10683], 00:12:44.043 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11600], 60.00th=[12256], 00:12:44.043 | 70.00th=[12911], 80.00th=[14091], 90.00th=[14877], 95.00th=[19006], 00:12:44.043 | 99.00th=[22676], 99.50th=[22676], 99.90th=[33817], 99.95th=[33817], 00:12:44.043 | 99.99th=[41157] 00:12:44.043 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:12:44.043 slat (usec): min=3, max=18655, avg=93.13, stdev=759.49 00:12:44.043 clat (usec): min=4150, max=38937, avg=13137.33, stdev=4278.63 00:12:44.043 lat (usec): min=4157, max=38957, avg=13230.45, stdev=4348.10 00:12:44.043 clat percentiles (usec): 00:12:44.043 | 1.00th=[ 7570], 5.00th=[ 8160], 10.00th=[ 8586], 20.00th=[10290], 
00:12:44.043 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11863], 00:12:44.043 | 70.00th=[13566], 80.00th=[17171], 90.00th=[20317], 95.00th=[20841], 00:12:44.043 | 99.00th=[25297], 99.50th=[25297], 99.90th=[27657], 99.95th=[38536], 00:12:44.043 | 99.99th=[39060] 00:12:44.043 bw ( KiB/s): min=20032, max=20928, per=33.63%, avg=20480.00, stdev=633.57, samples=2 00:12:44.043 iops : min= 5008, max= 5232, avg=5120.00, stdev=158.39, samples=2 00:12:44.043 lat (usec) : 750=0.01% 00:12:44.043 lat (msec) : 4=0.28%, 10=13.44%, 20=76.92%, 50=9.35% 00:12:44.043 cpu : usr=3.69%, sys=7.19%, ctx=316, majf=0, minf=1 00:12:44.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:44.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:44.043 issued rwts: total=4868,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:44.043 job2: (groupid=0, jobs=1): err= 0: pid=1738118: Mon Oct 7 13:23:25 2024 00:12:44.043 read: IOPS=2055, BW=8222KiB/s (8419kB/s)(8296KiB/1009msec) 00:12:44.043 slat (usec): min=3, max=15769, avg=217.92, stdev=1147.30 00:12:44.043 clat (usec): min=4178, max=54390, avg=28620.80, stdev=10162.40 00:12:44.043 lat (usec): min=9135, max=60587, avg=28838.71, stdev=10243.86 00:12:44.043 clat percentiles (usec): 00:12:44.043 | 1.00th=[12256], 5.00th=[17957], 10.00th=[18220], 20.00th=[18482], 00:12:44.043 | 30.00th=[19006], 40.00th=[22938], 50.00th=[26084], 60.00th=[31327], 00:12:44.043 | 70.00th=[35914], 80.00th=[39060], 90.00th=[42730], 95.00th=[46400], 00:12:44.043 | 99.00th=[51643], 99.50th=[53740], 99.90th=[54264], 99.95th=[54264], 00:12:44.043 | 99.99th=[54264] 00:12:44.043 write: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(10.0MiB/1009msec); 0 zone resets 00:12:44.043 slat (usec): min=5, max=9328, avg=205.64, stdev=929.49 00:12:44.043 clat (usec): 
min=15489, max=58926, avg=26393.90, stdev=10744.30 00:12:44.043 lat (usec): min=15497, max=58936, avg=26599.54, stdev=10829.95 00:12:44.043 clat percentiles (usec): 00:12:44.043 | 1.00th=[15533], 5.00th=[15795], 10.00th=[16319], 20.00th=[18744], 00:12:44.043 | 30.00th=[19792], 40.00th=[22152], 50.00th=[23987], 60.00th=[24511], 00:12:44.043 | 70.00th=[25560], 80.00th=[32113], 90.00th=[45876], 95.00th=[53216], 00:12:44.043 | 99.00th=[58459], 99.50th=[58983], 99.90th=[58983], 99.95th=[58983], 00:12:44.043 | 99.99th=[58983] 00:12:44.043 bw ( KiB/s): min= 8064, max=11576, per=16.13%, avg=9820.00, stdev=2483.36, samples=2 00:12:44.043 iops : min= 2016, max= 2894, avg=2455.00, stdev=620.84, samples=2 00:12:44.043 lat (msec) : 10=0.19%, 20=32.20%, 50=62.73%, 100=4.88% 00:12:44.043 cpu : usr=3.77%, sys=4.96%, ctx=257, majf=0, minf=1 00:12:44.043 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:12:44.043 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.043 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:44.043 issued rwts: total=2074,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.043 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:44.043 job3: (groupid=0, jobs=1): err= 0: pid=1738119: Mon Oct 7 13:23:25 2024 00:12:44.043 read: IOPS=4188, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1008msec) 00:12:44.043 slat (usec): min=2, max=14373, avg=102.60, stdev=656.73 00:12:44.043 clat (usec): min=946, max=73601, avg=14368.33, stdev=8088.85 00:12:44.043 lat (usec): min=1461, max=73605, avg=14470.93, stdev=8098.74 00:12:44.043 clat percentiles (usec): 00:12:44.044 | 1.00th=[ 4146], 5.00th=[ 9241], 10.00th=[10290], 20.00th=[10683], 00:12:44.044 | 30.00th=[11076], 40.00th=[12518], 50.00th=[13042], 60.00th=[13304], 00:12:44.044 | 70.00th=[13566], 80.00th=[14484], 90.00th=[16909], 95.00th=[27395], 00:12:44.044 | 99.00th=[61604], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:12:44.044 | 
99.99th=[73925] 00:12:44.044 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:12:44.044 slat (usec): min=3, max=27534, avg=117.46, stdev=920.20 00:12:44.044 clat (usec): min=3326, max=57887, avg=14576.33, stdev=5525.69 00:12:44.044 lat (usec): min=3333, max=73621, avg=14693.79, stdev=5650.66 00:12:44.044 clat percentiles (usec): 00:12:44.044 | 1.00th=[ 5932], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[11207], 00:12:44.044 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13698], 00:12:44.044 | 70.00th=[14484], 80.00th=[16712], 90.00th=[20579], 95.00th=[20841], 00:12:44.044 | 99.00th=[37487], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:12:44.044 | 99.99th=[57934] 00:12:44.044 bw ( KiB/s): min=15824, max=21024, per=30.26%, avg=18424.00, stdev=3676.96, samples=2 00:12:44.044 iops : min= 3956, max= 5256, avg=4606.00, stdev=919.24, samples=2 00:12:44.044 lat (usec) : 1000=0.01% 00:12:44.044 lat (msec) : 2=0.10%, 4=0.31%, 10=8.32%, 20=79.50%, 50=10.94% 00:12:44.044 lat (msec) : 100=0.82% 00:12:44.044 cpu : usr=3.57%, sys=6.26%, ctx=400, majf=0, minf=1 00:12:44.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:44.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:44.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:44.044 issued rwts: total=4222,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:44.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:44.044 00:12:44.044 Run status group 0 (all jobs): 00:12:44.044 READ: bw=53.9MiB/s (56.5MB/s), 8222KiB/s-19.0MiB/s (8419kB/s-19.9MB/s), io=54.3MiB (57.0MB), run=1003-1009msec 00:12:44.044 WRITE: bw=59.5MiB/s (62.4MB/s), 9.91MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=60.0MiB (62.9MB), run=1003-1009msec 00:12:44.044 00:12:44.044 Disk stats (read/write): 00:12:44.044 nvme0n1: ios=2611/2639, merge=0/0, ticks=22910/28835, in_queue=51745, util=94.09% 00:12:44.044 nvme0n2: 
ios=4142/4343, merge=0/0, ticks=24110/28318, in_queue=52428, util=97.56% 00:12:44.044 nvme0n3: ios=2101/2071, merge=0/0, ticks=19424/15429, in_queue=34853, util=98.23% 00:12:44.044 nvme0n4: ios=3600/3584, merge=0/0, ticks=19056/26164, in_queue=45220, util=97.27% 00:12:44.044 13:23:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:44.044 13:23:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1738255 00:12:44.044 13:23:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:44.044 13:23:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:44.044 [global] 00:12:44.044 thread=1 00:12:44.044 invalidate=1 00:12:44.044 rw=read 00:12:44.044 time_based=1 00:12:44.044 runtime=10 00:12:44.044 ioengine=libaio 00:12:44.044 direct=1 00:12:44.044 bs=4096 00:12:44.044 iodepth=1 00:12:44.044 norandommap=1 00:12:44.044 numjobs=1 00:12:44.044 00:12:44.044 [job0] 00:12:44.044 filename=/dev/nvme0n1 00:12:44.044 [job1] 00:12:44.044 filename=/dev/nvme0n2 00:12:44.044 [job2] 00:12:44.044 filename=/dev/nvme0n3 00:12:44.044 [job3] 00:12:44.044 filename=/dev/nvme0n4 00:12:44.044 Could not set queue depth (nvme0n1) 00:12:44.044 Could not set queue depth (nvme0n2) 00:12:44.044 Could not set queue depth (nvme0n3) 00:12:44.044 Could not set queue depth (nvme0n4) 00:12:44.044 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:44.044 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:44.044 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:44.044 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:44.044 fio-3.35 00:12:44.044 Starting 4 threads 00:12:47.332 
13:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:47.332 13:23:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:47.332 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46755840, buflen=4096 00:12:47.332 fio: pid=1738460, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:47.590 13:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:47.590 13:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:47.590 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=47824896, buflen=4096 00:12:47.590 fio: pid=1738459, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:47.849 13:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:47.849 13:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:47.849 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=13897728, buflen=4096 00:12:47.849 fio: pid=1738457, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:48.107 13:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:48.107 13:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc2 00:12:48.107 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=2551808, buflen=4096 00:12:48.107 fio: pid=1738458, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:12:48.107 00:12:48.107 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1738457: Mon Oct 7 13:23:29 2024 00:12:48.107 read: IOPS=975, BW=3900KiB/s (3994kB/s)(13.3MiB/3480msec) 00:12:48.107 slat (usec): min=4, max=6880, avg=15.31, stdev=135.67 00:12:48.107 clat (usec): min=189, max=42023, avg=1000.31, stdev=5455.35 00:12:48.107 lat (usec): min=196, max=47981, avg=1015.61, stdev=5481.13 00:12:48.107 clat percentiles (usec): 00:12:48.107 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 223], 20.00th=[ 231], 00:12:48.107 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 253], 00:12:48.107 | 70.00th=[ 262], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 351], 00:12:48.107 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:12:48.107 | 99.99th=[42206] 00:12:48.107 bw ( KiB/s): min= 104, max=14184, per=15.72%, avg=4509.33, stdev=6166.80, samples=6 00:12:48.108 iops : min= 26, max= 3546, avg=1127.33, stdev=1541.70, samples=6 00:12:48.108 lat (usec) : 250=55.51%, 500=42.46%, 750=0.18% 00:12:48.108 lat (msec) : 50=1.83% 00:12:48.108 cpu : usr=0.55%, sys=1.35%, ctx=3398, majf=0, minf=1 00:12:48.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:48.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.108 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.108 issued rwts: total=3394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:48.108 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1738458: Mon Oct 7 13:23:29 2024 00:12:48.108 read: IOPS=165, BW=659KiB/s 
(675kB/s)(2492KiB/3779msec) 00:12:48.108 slat (usec): min=6, max=22976, avg=94.61, stdev=1080.12 00:12:48.108 clat (usec): min=214, max=42043, avg=5964.93, stdev=14022.00 00:12:48.108 lat (usec): min=230, max=64989, avg=6048.98, stdev=14174.78 00:12:48.108 clat percentiles (usec): 00:12:48.108 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 249], 20.00th=[ 289], 00:12:48.108 | 30.00th=[ 310], 40.00th=[ 326], 50.00th=[ 343], 60.00th=[ 367], 00:12:48.108 | 70.00th=[ 437], 80.00th=[ 545], 90.00th=[40633], 95.00th=[41157], 00:12:48.108 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:48.108 | 99.99th=[42206] 00:12:48.108 bw ( KiB/s): min= 232, max= 1648, per=2.40%, avg=689.29, stdev=489.61, samples=7 00:12:48.108 iops : min= 58, max= 412, avg=172.29, stdev=122.42, samples=7 00:12:48.108 lat (usec) : 250=10.90%, 500=64.42%, 750=10.26%, 1000=0.16% 00:12:48.108 lat (msec) : 2=0.16%, 4=0.16%, 50=13.78% 00:12:48.108 cpu : usr=0.32%, sys=0.50%, ctx=629, majf=0, minf=2 00:12:48.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:48.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.108 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.108 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:48.108 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1738459: Mon Oct 7 13:23:29 2024 00:12:48.108 read: IOPS=3647, BW=14.2MiB/s (14.9MB/s)(45.6MiB/3201msec) 00:12:48.108 slat (nsec): min=4461, max=69240, avg=12260.29, stdev=6530.88 00:12:48.108 clat (usec): min=179, max=3070, avg=256.73, stdev=62.42 00:12:48.108 lat (usec): min=185, max=3083, avg=268.99, stdev=66.11 00:12:48.108 clat percentiles (usec): 00:12:48.108 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 221], 00:12:48.108 | 30.00th=[ 229], 40.00th=[ 237], 
50.00th=[ 245], 60.00th=[ 255], 00:12:48.108 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 347], 00:12:48.108 | 99.00th=[ 529], 99.50th=[ 545], 99.90th=[ 586], 99.95th=[ 594], 00:12:48.108 | 99.99th=[ 627] 00:12:48.108 bw ( KiB/s): min=13024, max=16232, per=50.55%, avg=14505.33, stdev=1285.62, samples=6 00:12:48.108 iops : min= 3256, max= 4058, avg=3626.33, stdev=321.41, samples=6 00:12:48.108 lat (usec) : 250=54.56%, 500=43.87%, 750=1.55% 00:12:48.108 lat (msec) : 4=0.01% 00:12:48.108 cpu : usr=3.06%, sys=6.56%, ctx=11677, majf=0, minf=2 00:12:48.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:48.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.108 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.108 issued rwts: total=11677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:48.108 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1738460: Mon Oct 7 13:23:29 2024 00:12:48.108 read: IOPS=3921, BW=15.3MiB/s (16.1MB/s)(44.6MiB/2911msec) 00:12:48.108 slat (nsec): min=4300, max=71053, avg=10858.02, stdev=6262.84 00:12:48.108 clat (usec): min=182, max=762, avg=239.58, stdev=38.42 00:12:48.108 lat (usec): min=188, max=774, avg=250.43, stdev=40.77 00:12:48.108 clat percentiles (usec): 00:12:48.108 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:12:48.108 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:12:48.108 | 70.00th=[ 243], 80.00th=[ 265], 90.00th=[ 289], 95.00th=[ 326], 00:12:48.108 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 408], 99.95th=[ 494], 00:12:48.108 | 99.99th=[ 562] 00:12:48.108 bw ( KiB/s): min=12888, max=17256, per=56.28%, avg=16147.20, stdev=1832.26, samples=5 00:12:48.108 iops : min= 3222, max= 4314, avg=4036.80, stdev=458.06, samples=5 00:12:48.108 lat (usec) : 250=73.76%, 
500=26.19%, 750=0.04%, 1000=0.01% 00:12:48.108 cpu : usr=1.79%, sys=5.02%, ctx=11416, majf=0, minf=1 00:12:48.108 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:48.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.108 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:48.108 issued rwts: total=11416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:48.108 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:48.108 00:12:48.108 Run status group 0 (all jobs): 00:12:48.108 READ: bw=28.0MiB/s (29.4MB/s), 659KiB/s-15.3MiB/s (675kB/s-16.1MB/s), io=106MiB (111MB), run=2911-3779msec 00:12:48.108 00:12:48.108 Disk stats (read/write): 00:12:48.108 nvme0n1: ios=3427/0, merge=0/0, ticks=3431/0, in_queue=3431, util=99.40% 00:12:48.108 nvme0n2: ios=639/0, merge=0/0, ticks=4047/0, in_queue=4047, util=98.87% 00:12:48.108 nvme0n3: ios=11330/0, merge=0/0, ticks=2737/0, in_queue=2737, util=96.79% 00:12:48.108 nvme0n4: ios=11275/0, merge=0/0, ticks=2567/0, in_queue=2567, util=96.78% 00:12:48.367 13:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:48.367 13:23:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:48.625 13:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:48.625 13:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:48.882 13:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:48.882 13:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:49.140 13:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:49.140 13:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:49.398 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:49.398 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1738255 00:12:49.398 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:49.398 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.656 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.656 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:12:49.656 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:49.656 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.656 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:49.656 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.656 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:12:49.656 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:49.656 13:23:31 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:49.656 nvmf hotplug test: fio failed as expected 00:12:49.656 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:49.915 rmmod nvme_tcp 00:12:49.915 rmmod nvme_fabrics 00:12:49.915 rmmod nvme_keyring 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 
00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@515 -- # '[' -n 1736283 ']' 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # killprocess 1736283 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 1736283 ']' 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 1736283 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1736283 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1736283' 00:12:49.915 killing process with pid 1736283 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 1736283 00:12:49.915 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 1736283 00:12:50.484 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:50.484 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:50.484 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:50.484 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:12:50.484 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@789 -- # iptables-save 00:12:50.484 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:50.484 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@789 -- # iptables-restore 00:12:50.484 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:50.484 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:50.484 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.484 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:50.484 13:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.392 13:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:52.392 00:12:52.392 real 0m24.195s 00:12:52.392 user 1m24.182s 00:12:52.392 sys 0m7.592s 00:12:52.392 13:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:52.392 13:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.392 ************************************ 00:12:52.392 END TEST nvmf_fio_target 00:12:52.392 ************************************ 00:12:52.393 13:23:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:52.393 13:23:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:52.393 13:23:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:52.393 13:23:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:52.393 
************************************ 00:12:52.393 START TEST nvmf_bdevio 00:12:52.393 ************************************ 00:12:52.393 13:23:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:52.393 * Looking for test storage... 00:12:52.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.393 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:52.393 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:12:52.393 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.652 13:23:34 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:12:52.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:52.652 --rc genhtml_branch_coverage=1
00:12:52.652 --rc genhtml_function_coverage=1
00:12:52.652 --rc genhtml_legend=1
00:12:52.652 --rc geninfo_all_blocks=1
00:12:52.652 --rc geninfo_unexecuted_blocks=1
00:12:52.652
00:12:52.652 '
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:12:52.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:52.652 --rc genhtml_branch_coverage=1
00:12:52.652 --rc genhtml_function_coverage=1
00:12:52.652 --rc genhtml_legend=1
00:12:52.652 --rc geninfo_all_blocks=1
00:12:52.652 --rc geninfo_unexecuted_blocks=1
00:12:52.652
00:12:52.652 '
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:12:52.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:52.652 --rc genhtml_branch_coverage=1
00:12:52.652 --rc genhtml_function_coverage=1
00:12:52.652 --rc genhtml_legend=1
00:12:52.652 --rc geninfo_all_blocks=1
00:12:52.652 --rc geninfo_unexecuted_blocks=1
00:12:52.652
00:12:52.652 '
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:12:52.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:52.652 --rc genhtml_branch_coverage=1
00:12:52.652 --rc genhtml_function_coverage=1
00:12:52.652 --rc genhtml_legend=1
00:12:52.652 --rc geninfo_all_blocks=1
00:12:52.652 --rc geninfo_unexecuted_blocks=1
00:12:52.652
00:12:52.652 '
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:12:52.652 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:12:52.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@467 -- # '[' -z tcp ']'
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # prepare_net_devs
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # local -g is_hw=no
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # remove_spdk_ns
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ phy != virt ]]
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable
00:12:52.653 13:23:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:54.559 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:54.559 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=()
00:12:54.559 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs
00:12:54.559 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=()
00:12:54.559 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:12:54.559 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=()
00:12:54.559 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers
00:12:54.559 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=()
00:12:54.559 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs
00:12:54.559 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=()
00:12:54.559 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810
00:12:54.559 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=()
00:12:54.559 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=()
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)'
00:12:54.560 Found 0000:09:00.0 (0x8086 - 0x1592)
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)'
00:12:54.560 Found 0000:09:00.1 (0x8086 - 0x1592)
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:12:54.560 Found net devices under 0000:09:00.0: cvl_0_0
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}"
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ tcp == tcp ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ up == up ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- # (( 1 == 0 ))
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1'
00:12:54.560 Found net devices under 0000:09:00.1: cvl_0_1
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}")
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@430 -- # (( 2 == 0 ))
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # is_hw=yes
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ yes == yes ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@443 -- # [[ tcp == tcp ]]
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # nvmf_tcp_init
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:12:54.560 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:12:54.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:54.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms
00:12:54.819
00:12:54.819 --- 10.0.0.2 ping statistics ---
00:12:54.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:54.819 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:54.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:54.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms
00:12:54.819
00:12:54.819 --- 10.0.0.1 ping statistics ---
00:12:54.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:54.819 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # return 0
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # nvmfpid=1740971
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # waitforlisten 1740971
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 1740971 ']'
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:54.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable
00:12:54.819 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:54.819 [2024-10-07 13:23:36.420655] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization...
00:12:54.819 [2024-10-07 13:23:36.420744] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:54.819 [2024-10-07 13:23:36.480439] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:55.077 [2024-10-07 13:23:36.584732] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:55.077 [2024-10-07 13:23:36.584790] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:55.077 [2024-10-07 13:23:36.584818] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:55.077 [2024-10-07 13:23:36.584830] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:55.077 [2024-10-07 13:23:36.584839] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:55.077 [2024-10-07 13:23:36.586449] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4
00:12:55.077 [2024-10-07 13:23:36.586552] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5
00:12:55.077 [2024-10-07 13:23:36.586697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:12:55.077 [2024-10-07 13:23:36.586664] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6
00:12:55.077 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:12:55.077 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0
00:12:55.077 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt
00:12:55.077 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable
00:12:55.077 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:55.077 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:55.077 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:55.077 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.077 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:55.078 [2024-10-07 13:23:36.737170] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:55.078 Malloc0
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:55.078 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:55.078 [2024-10-07 13:23:36.790130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:55.338 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:55.338 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:12:55.338 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:12:55.338 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # config=()
00:12:55.338 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # local subsystem config
00:12:55.338 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}"
00:12:55.338 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF
00:12:55.338 {
00:12:55.338 "params": {
00:12:55.338 "name": "Nvme$subsystem",
00:12:55.338 "trtype": "$TEST_TRANSPORT",
00:12:55.338 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:55.338 "adrfam": "ipv4",
00:12:55.338 "trsvcid": "$NVMF_PORT",
00:12:55.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:55.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:55.338 "hdgst": ${hdgst:-false},
00:12:55.338 "ddgst": ${ddgst:-false}
00:12:55.338 },
00:12:55.338 "method": "bdev_nvme_attach_controller"
00:12:55.338 }
00:12:55.338 EOF
00:12:55.338 )")
00:12:55.338 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # cat
00:12:55.338 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # jq .
00:12:55.338 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@583 -- # IFS=,
00:12:55.338 13:23:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:12:55.338 "params": {
00:12:55.338 "name": "Nvme1",
00:12:55.338 "trtype": "tcp",
00:12:55.338 "traddr": "10.0.0.2",
00:12:55.338 "adrfam": "ipv4",
00:12:55.338 "trsvcid": "4420",
00:12:55.338 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:55.338 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:55.338 "hdgst": false,
00:12:55.338 "ddgst": false
00:12:55.338 },
00:12:55.338 "method": "bdev_nvme_attach_controller"
00:12:55.338 }'
00:12:55.338 [2024-10-07 13:23:36.841701] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization...
00:12:55.338 [2024-10-07 13:23:36.841777] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1741005 ]
00:12:55.338 [2024-10-07 13:23:36.902783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:55.338 [2024-10-07 13:23:37.019209] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:12:55.338 [2024-10-07 13:23:37.019265] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:12:55.338 [2024-10-07 13:23:37.019268] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:12:55.906 I/O targets:
00:12:55.906 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:12:55.906
00:12:55.906
00:12:55.906 CUnit - A unit testing framework for C - Version 2.1-3
00:12:55.906 http://cunit.sourceforge.net/
00:12:55.906
00:12:55.906
00:12:55.906 Suite: bdevio tests on: Nvme1n1
00:12:55.906 Test: blockdev write read block ...passed
00:12:55.906 Test: blockdev write zeroes read block ...passed
00:12:55.906 Test: blockdev write zeroes read no split ...passed
00:12:55.906 Test: blockdev write zeroes read split ...passed
00:12:55.906 Test: blockdev write zeroes read split partial ...passed
00:12:55.906 Test: blockdev reset ...[2024-10-07 13:23:37.475822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:12:55.906 [2024-10-07 13:23:37.475930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7bc130 (9): Bad file descriptor
00:12:55.906 [2024-10-07 13:23:37.492209] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:12:55.907 passed
00:12:55.907 Test: blockdev write read 8 blocks ...passed
00:12:55.907 Test: blockdev write read size > 128k ...passed
00:12:55.907 Test: blockdev write read invalid size ...passed
00:12:55.907 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:55.907 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:55.907 Test: blockdev write read max offset ...passed
00:12:56.165 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:56.165 Test: blockdev writev readv 8 blocks ...passed
00:12:56.165 Test: blockdev writev readv 30 x 1block ...passed
00:12:56.165 Test: blockdev writev readv block ...passed
00:12:56.165 Test: blockdev writev readv size > 128k ...passed
00:12:56.165 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:56.165 Test: blockdev comparev and writev ...[2024-10-07 13:23:37.744101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:56.165 [2024-10-07 13:23:37.744138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:12:56.165 [2024-10-07 13:23:37.744163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:56.165 [2024-10-07 13:23:37.744181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:12:56.165 [2024-10-07 13:23:37.744507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:56.165 [2024-10-07 13:23:37.744532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:12:56.166 [2024-10-07 13:23:37.744555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:56.166 [2024-10-07 13:23:37.744572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:12:56.166 [2024-10-07 13:23:37.744902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:56.166 [2024-10-07 13:23:37.744927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:12:56.166 [2024-10-07 13:23:37.744959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:56.166 [2024-10-07 13:23:37.744976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:12:56.166 [2024-10-07 13:23:37.745309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:56.166 [2024-10-07 13:23:37.745333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:12:56.166 [2024-10-07 13:23:37.745355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:56.166 [2024-10-07 13:23:37.745371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:56.166 passed 00:12:56.166 Test: blockdev nvme passthru rw ...passed 00:12:56.166 Test: blockdev nvme passthru vendor specific ...[2024-10-07 13:23:37.826918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:56.166 [2024-10-07 13:23:37.826947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:56.166 [2024-10-07 13:23:37.827080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:56.166 [2024-10-07 13:23:37.827104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:56.166 [2024-10-07 13:23:37.827231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:56.166 [2024-10-07 13:23:37.827254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:56.166 [2024-10-07 13:23:37.827384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:56.166 [2024-10-07 13:23:37.827408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:56.166 passed 00:12:56.166 Test: blockdev nvme admin passthru ...passed 00:12:56.424 Test: blockdev copy ...passed 00:12:56.424 00:12:56.424 Run Summary: Type Total Ran Passed Failed Inactive 00:12:56.424 suites 1 1 n/a 0 0 00:12:56.424 tests 23 23 23 0 0 00:12:56.424 asserts 152 152 152 0 n/a 00:12:56.424 00:12:56.424 Elapsed time = 1.046 seconds 00:12:56.424 13:23:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:56.424 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.424 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:56.424 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.424 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:56.424 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:56.424 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # nvmfcleanup 00:12:56.424 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:12:56.424 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:56.424 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:12:56.424 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:56.424 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:56.685 rmmod nvme_tcp 00:12:56.685 rmmod nvme_fabrics 00:12:56.685 rmmod nvme_keyring 00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@515 -- # '[' -n 1740971 ']' 00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # killprocess 1740971 00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 1740971 ']' 
00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 1740971 00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1740971 00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1740971' 00:12:56.685 killing process with pid 1740971 00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 1740971 00:12:56.685 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 1740971 00:12:56.943 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:12:56.943 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:12:56.943 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:12:56.943 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:12:56.943 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-save 00:12:56.943 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:12:56.943 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@789 -- # iptables-restore 00:12:56.943 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:56.943 
13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:56.943 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.943 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.943 13:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:59.485 00:12:59.485 real 0m6.591s 00:12:59.485 user 0m10.929s 00:12:59.485 sys 0m2.143s 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:59.485 ************************************ 00:12:59.485 END TEST nvmf_bdevio 00:12:59.485 ************************************ 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:59.485 00:12:59.485 real 3m56.527s 00:12:59.485 user 10m19.450s 00:12:59.485 sys 1m7.348s 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:59.485 ************************************ 00:12:59.485 END TEST nvmf_target_core 00:12:59.485 ************************************ 00:12:59.485 13:23:40 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:59.485 13:23:40 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:59.485 13:23:40 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.485 13:23:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:59.485 
************************************ 00:12:59.485 START TEST nvmf_target_extra 00:12:59.485 ************************************ 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:59.485 * Looking for test storage... 00:12:59.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:59.485 
13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:59.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.485 --rc genhtml_branch_coverage=1 00:12:59.485 --rc genhtml_function_coverage=1 00:12:59.485 --rc genhtml_legend=1 00:12:59.485 --rc geninfo_all_blocks=1 00:12:59.485 
--rc geninfo_unexecuted_blocks=1 00:12:59.485 00:12:59.485 ' 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:59.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.485 --rc genhtml_branch_coverage=1 00:12:59.485 --rc genhtml_function_coverage=1 00:12:59.485 --rc genhtml_legend=1 00:12:59.485 --rc geninfo_all_blocks=1 00:12:59.485 --rc geninfo_unexecuted_blocks=1 00:12:59.485 00:12:59.485 ' 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:59.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.485 --rc genhtml_branch_coverage=1 00:12:59.485 --rc genhtml_function_coverage=1 00:12:59.485 --rc genhtml_legend=1 00:12:59.485 --rc geninfo_all_blocks=1 00:12:59.485 --rc geninfo_unexecuted_blocks=1 00:12:59.485 00:12:59.485 ' 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:59.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.485 --rc genhtml_branch_coverage=1 00:12:59.485 --rc genhtml_function_coverage=1 00:12:59.485 --rc genhtml_legend=1 00:12:59.485 --rc geninfo_all_blocks=1 00:12:59.485 --rc geninfo_unexecuted_blocks=1 00:12:59.485 00:12:59.485 ' 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:12:59.485 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.486 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:59.486 ************************************ 00:12:59.486 START TEST nvmf_example 00:12:59.486 ************************************ 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:59.486 * Looking for test storage... 00:12:59.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lcov --version 00:12:59.486 13:23:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:59.486 
13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:59.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.486 --rc genhtml_branch_coverage=1 00:12:59.486 --rc genhtml_function_coverage=1 00:12:59.486 --rc genhtml_legend=1 00:12:59.486 --rc geninfo_all_blocks=1 00:12:59.486 --rc geninfo_unexecuted_blocks=1 00:12:59.486 00:12:59.486 ' 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:59.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.486 --rc genhtml_branch_coverage=1 00:12:59.486 --rc genhtml_function_coverage=1 00:12:59.486 --rc genhtml_legend=1 00:12:59.486 --rc geninfo_all_blocks=1 00:12:59.486 --rc geninfo_unexecuted_blocks=1 00:12:59.486 00:12:59.486 ' 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:59.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.486 --rc genhtml_branch_coverage=1 00:12:59.486 --rc genhtml_function_coverage=1 00:12:59.486 --rc genhtml_legend=1 00:12:59.486 --rc geninfo_all_blocks=1 00:12:59.486 --rc geninfo_unexecuted_blocks=1 00:12:59.486 00:12:59.486 ' 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:59.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:59.486 --rc 
genhtml_branch_coverage=1 00:12:59.486 --rc genhtml_function_coverage=1 00:12:59.486 --rc genhtml_legend=1 00:12:59.486 --rc geninfo_all_blocks=1 00:12:59.486 --rc geninfo_unexecuted_blocks=1 00:12:59.486 00:12:59.486 ' 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:12:59.486 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:59.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:59.487 13:23:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # prepare_net_devs 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@436 -- # local -g is_hw=no 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # remove_spdk_ns 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.487 
13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:12:59.487 13:23:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:01.450 13:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:13:01.450 Found 0000:09:00.0 (0x8086 - 0x1592) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:13:01.450 Found 0000:09:00.1 (0x8086 - 0x1592) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x1592 == \0\x\1\0\1\9 ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:01.450 Found net devices under 0000:09:00.0: cvl_0_0 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:01.450 13:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:01.450 Found net devices under 0000:09:00.1: cvl_0_1 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # is_hw=yes 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.450 
13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.450 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.451 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@788 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:01.451 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:01.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:13:01.709 00:13:01.709 --- 10.0.0.2 ping statistics --- 00:13:01.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.709 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:01.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:13:01.709 00:13:01.709 --- 10.0.0.1 ping statistics --- 00:13:01.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.709 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # return 0 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:01.709 13:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1743155 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1743155 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 1743155 ']' 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:13:01.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:01.709 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:01.968 
13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:01.968 13:23:43 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:14.180 Initializing NVMe Controllers 00:13:14.180 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:14.180 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:14.180 Initialization complete. Launching workers. 00:13:14.180 ======================================================== 00:13:14.180 Latency(us) 00:13:14.180 Device Information : IOPS MiB/s Average min max 00:13:14.180 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14488.50 56.60 4416.73 910.99 15931.44 00:13:14.180 ======================================================== 00:13:14.180 Total : 14488.50 56.60 4416.73 910.99 15931.44 00:13:14.180 00:13:14.180 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:14.180 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:14.180 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:14.180 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:13:14.180 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:14.181 rmmod nvme_tcp 00:13:14.181 rmmod nvme_fabrics 00:13:14.181 rmmod nvme_keyring 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@515 -- # '[' -n 1743155 ']' 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # killprocess 1743155 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 1743155 ']' 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 1743155 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1743155 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1743155' 00:13:14.181 killing process with pid 1743155 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 1743155 00:13:14.181 13:23:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 1743155 00:13:14.181 nvmf threads initialize successfully 00:13:14.181 bdev subsystem init successfully 00:13:14.181 created a nvmf target service 00:13:14.181 create targets's poll groups done 00:13:14.181 all subsystems of target started 00:13:14.181 nvmf target is running 00:13:14.181 all subsystems of target stopped 00:13:14.181 destroy targets's poll groups done 00:13:14.181 destroyed the nvmf target service 00:13:14.181 bdev subsystem 
finish successfully 00:13:14.181 nvmf threads destroy successfully 00:13:14.181 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:14.181 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:14.181 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:14.181 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:13:14.181 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:14.181 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-save 00:13:14.181 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@789 -- # iptables-restore 00:13:14.181 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:14.181 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:14.181 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:14.181 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:14.181 13:23:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:14.749 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:14.749 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:14.749 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:14.749 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:14.749 00:13:14.749 real 0m15.399s 00:13:14.749 user 0m42.530s 00:13:14.749 sys 0m3.282s 00:13:14.749 
13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:14.749 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:14.749 ************************************ 00:13:14.749 END TEST nvmf_example 00:13:14.749 ************************************ 00:13:14.749 13:23:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:14.749 13:23:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:14.749 13:23:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.749 13:23:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:14.749 ************************************ 00:13:14.749 START TEST nvmf_filesystem 00:13:14.749 ************************************ 00:13:14.749 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:14.749 * Looking for test storage... 
00:13:14.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:14.749 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:14.749 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:13:14.749 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:15.015 
13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:15.015 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:15.015 --rc genhtml_branch_coverage=1 00:13:15.015 --rc genhtml_function_coverage=1 00:13:15.015 --rc genhtml_legend=1 00:13:15.015 --rc geninfo_all_blocks=1 00:13:15.015 --rc geninfo_unexecuted_blocks=1 00:13:15.015 00:13:15.015 ' 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:15.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.015 --rc genhtml_branch_coverage=1 00:13:15.015 --rc genhtml_function_coverage=1 00:13:15.015 --rc genhtml_legend=1 00:13:15.015 --rc geninfo_all_blocks=1 00:13:15.015 --rc geninfo_unexecuted_blocks=1 00:13:15.015 00:13:15.015 ' 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:15.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.015 --rc genhtml_branch_coverage=1 00:13:15.015 --rc genhtml_function_coverage=1 00:13:15.015 --rc genhtml_legend=1 00:13:15.015 --rc geninfo_all_blocks=1 00:13:15.015 --rc geninfo_unexecuted_blocks=1 00:13:15.015 00:13:15.015 ' 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:15.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.015 --rc genhtml_branch_coverage=1 00:13:15.015 --rc genhtml_function_coverage=1 00:13:15.015 --rc genhtml_legend=1 00:13:15.015 --rc geninfo_all_blocks=1 00:13:15.015 --rc geninfo_unexecuted_blocks=1 00:13:15.015 00:13:15.015 ' 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:15.015 13:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:15.015 13:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:13:15.015 13:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:13:15.015 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_DAOS=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR= 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:13:15.016 13:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR= 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_FC=n 
00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_PGO_DIR= 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_URING=n 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # 
[[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:15.016 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:15.016 #define SPDK_CONFIG_H 00:13:15.016 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:15.016 #define SPDK_CONFIG_APPS 1 00:13:15.016 #define SPDK_CONFIG_ARCH native 00:13:15.016 #undef SPDK_CONFIG_ASAN 00:13:15.016 #undef SPDK_CONFIG_AVAHI 00:13:15.016 #undef SPDK_CONFIG_CET 00:13:15.016 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:15.016 #define SPDK_CONFIG_COVERAGE 1 00:13:15.016 #define SPDK_CONFIG_CROSS_PREFIX 00:13:15.016 #undef SPDK_CONFIG_CRYPTO 00:13:15.016 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:15.016 #undef SPDK_CONFIG_CUSTOMOCF 00:13:15.016 #undef SPDK_CONFIG_DAOS 00:13:15.016 #define SPDK_CONFIG_DAOS_DIR 00:13:15.016 #define SPDK_CONFIG_DEBUG 1 00:13:15.016 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:15.016 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:15.016 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:15.016 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:15.016 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:15.016 #undef SPDK_CONFIG_DPDK_UADK 00:13:15.016 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:15.016 #define SPDK_CONFIG_EXAMPLES 1 00:13:15.016 #undef SPDK_CONFIG_FC 00:13:15.016 #define SPDK_CONFIG_FC_PATH 00:13:15.016 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:15.016 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:15.016 #define SPDK_CONFIG_FSDEV 1 00:13:15.016 #undef SPDK_CONFIG_FUSE 00:13:15.016 #undef SPDK_CONFIG_FUZZER 00:13:15.016 #define SPDK_CONFIG_FUZZER_LIB 00:13:15.016 #undef SPDK_CONFIG_GOLANG 00:13:15.016 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:15.016 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:15.016 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:15.016 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:15.016 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:15.016 #undef 
SPDK_CONFIG_HAVE_LIBBSD 00:13:15.016 #undef SPDK_CONFIG_HAVE_LZ4 00:13:15.016 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:15.016 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:15.016 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:15.016 #define SPDK_CONFIG_IDXD 1 00:13:15.016 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:15.016 #undef SPDK_CONFIG_IPSEC_MB 00:13:15.016 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:15.016 #define SPDK_CONFIG_ISAL 1 00:13:15.016 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:15.016 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:15.016 #define SPDK_CONFIG_LIBDIR 00:13:15.016 #undef SPDK_CONFIG_LTO 00:13:15.016 #define SPDK_CONFIG_MAX_LCORES 128 00:13:15.016 #define SPDK_CONFIG_NVME_CUSE 1 00:13:15.016 #undef SPDK_CONFIG_OCF 00:13:15.016 #define SPDK_CONFIG_OCF_PATH 00:13:15.016 #define SPDK_CONFIG_OPENSSL_PATH 00:13:15.016 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:15.016 #define SPDK_CONFIG_PGO_DIR 00:13:15.016 #undef SPDK_CONFIG_PGO_USE 00:13:15.016 #define SPDK_CONFIG_PREFIX /usr/local 00:13:15.016 #undef SPDK_CONFIG_RAID5F 00:13:15.016 #undef SPDK_CONFIG_RBD 00:13:15.016 #define SPDK_CONFIG_RDMA 1 00:13:15.016 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:15.016 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:15.016 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:15.016 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:15.016 #define SPDK_CONFIG_SHARED 1 00:13:15.016 #undef SPDK_CONFIG_SMA 00:13:15.016 #define SPDK_CONFIG_TESTS 1 00:13:15.016 #undef SPDK_CONFIG_TSAN 00:13:15.016 #define SPDK_CONFIG_UBLK 1 00:13:15.016 #define SPDK_CONFIG_UBSAN 1 00:13:15.016 #undef SPDK_CONFIG_UNIT_TESTS 00:13:15.016 #undef SPDK_CONFIG_URING 00:13:15.016 #define SPDK_CONFIG_URING_PATH 00:13:15.016 #undef SPDK_CONFIG_URING_ZNS 00:13:15.016 #undef SPDK_CONFIG_USDT 00:13:15.016 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:15.016 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:15.016 #define SPDK_CONFIG_VFIO_USER 1 00:13:15.016 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:15.016 
#define SPDK_CONFIG_VHOST 1 00:13:15.016 #define SPDK_CONFIG_VIRTIO 1 00:13:15.016 #undef SPDK_CONFIG_VTUNE 00:13:15.016 #define SPDK_CONFIG_VTUNE_DIR 00:13:15.016 #define SPDK_CONFIG_WERROR 1 00:13:15.016 #define SPDK_CONFIG_WPDK_DIR 00:13:15.016 #undef SPDK_CONFIG_XNVME 00:13:15.017 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:15.017 13:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:15.017 
13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:15.017 13:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:15.017 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:15.018 
13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:13:15.018 13:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:15.018 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 1744778 ]] 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 1744778 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.YsMoQc 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.YsMoQc/tests/target /tmp/spdk.YsMoQc 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # 
sizes["$mount"]=67108864 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=55693443072 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988528128 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6295085056 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:15.019 
13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30984232960 00:13:15.019 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375318528 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22388736 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30993997824 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994264064 00:13:15.020 13:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=266240 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:13:15.020 * Looking for test storage... 
00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=55693443072 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=8509677568 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.020 13:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1668 -- # set -o errtrace 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1672 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1673 -- # true 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1675 -- # xtrace_fd 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:15.020 13:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lcov --version 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.020 --rc genhtml_branch_coverage=1 00:13:15.020 --rc genhtml_function_coverage=1 00:13:15.020 --rc genhtml_legend=1 00:13:15.020 --rc geninfo_all_blocks=1 00:13:15.020 --rc geninfo_unexecuted_blocks=1 00:13:15.020 00:13:15.020 ' 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.020 --rc genhtml_branch_coverage=1 00:13:15.020 --rc genhtml_function_coverage=1 00:13:15.020 --rc genhtml_legend=1 00:13:15.020 --rc geninfo_all_blocks=1 00:13:15.020 --rc geninfo_unexecuted_blocks=1 00:13:15.020 00:13:15.020 ' 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.020 --rc genhtml_branch_coverage=1 00:13:15.020 --rc genhtml_function_coverage=1 00:13:15.020 --rc genhtml_legend=1 00:13:15.020 --rc geninfo_all_blocks=1 00:13:15.020 --rc geninfo_unexecuted_blocks=1 00:13:15.020 00:13:15.020 ' 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.020 --rc genhtml_branch_coverage=1 00:13:15.020 --rc genhtml_function_coverage=1 00:13:15.020 --rc genhtml_legend=1 00:13:15.020 --rc geninfo_all_blocks=1 00:13:15.020 --rc geninfo_unexecuted_blocks=1 00:13:15.020 00:13:15.020 ' 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.020 13:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:15.020 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:15.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.279 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:13:15.280 13:23:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.186 13:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:13:17.186 Found 0000:09:00.0 (0x8086 - 0x1592) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:13:17.186 Found 0000:09:00.1 (0x8086 - 0x1592) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.186 13:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:17.186 Found net devices under 0000:09:00.0: cvl_0_0 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ up == up ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:17.186 Found net devices under 0000:09:00.1: cvl_0_1 00:13:17.186 13:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # is_hw=yes 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:17.186 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:17.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:17.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.422 ms 00:13:17.444 00:13:17.444 --- 10.0.0.2 ping statistics --- 00:13:17.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.444 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:17.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:13:17.444 00:13:17.444 --- 10.0.0.1 ping statistics --- 00:13:17.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.444 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # return 0 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:17.444 13:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:17.444 ************************************ 00:13:17.444 START TEST nvmf_filesystem_no_in_capsule 00:13:17.444 ************************************ 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1746341 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1746341 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 1746341 ']' 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:17.444 13:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.444 [2024-10-07 13:23:59.032972] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:13:17.444 [2024-10-07 13:23:59.033052] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.444 [2024-10-07 13:23:59.092637] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:17.702 [2024-10-07 13:23:59.193885] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.702 [2024-10-07 13:23:59.193945] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
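The startup notices above show nvmf_tgt launched with core mask 0xF and four reactors coming up on cores 0-3. As a side note, the mapping from the `-m` mask to the reactor core list can be sketched in plain bash (this is an illustrative decode, not SPDK's own parser):

```shell
# Decode an SPDK-style core mask (e.g. -m 0xF) into the list of
# cores the reactors run on; 0xF -> cores 0 1 2 3, matching the
# four "Reactor started on core N" notices in the log above.
mask=0xF
cores=""
for bit in $(seq 0 31); do
  if (( (mask >> bit) & 1 )); then
    cores="$cores $bit"
  fi
done
echo "cores:$cores"
```

With `-m 0xF` this yields cores 0 through 3, consistent with "Total cores available: 4" in the EAL output.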
00:13:17.702 [2024-10-07 13:23:59.193972] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:17.702 [2024-10-07 13:23:59.193983] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:17.702 [2024-10-07 13:23:59.193992] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:17.702 [2024-10-07 13:23:59.195382] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:17.702 [2024-10-07 13:23:59.195490] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.702 [2024-10-07 13:23:59.195588] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.702 [2024-10-07 13:23:59.195597] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.702 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:17.702 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:17.702 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:17.702 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:17.702 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.702 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:17.703 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:17.703 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:17.703 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.703 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.703 [2024-10-07 13:23:59.355195] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:17.703 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.703 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:17.703 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.703 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.961 Malloc1 00:13:17.961 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.961 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.962 [2024-10-07 13:23:59.520604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:17.962 13:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:17.962 { 00:13:17.962 "name": "Malloc1", 00:13:17.962 "aliases": [ 00:13:17.962 "987c1130-2762-4c75-b949-90c0a095e3e0" 00:13:17.962 ], 00:13:17.962 "product_name": "Malloc disk", 00:13:17.962 "block_size": 512, 00:13:17.962 "num_blocks": 1048576, 00:13:17.962 "uuid": "987c1130-2762-4c75-b949-90c0a095e3e0", 00:13:17.962 "assigned_rate_limits": { 00:13:17.962 "rw_ios_per_sec": 0, 00:13:17.962 "rw_mbytes_per_sec": 0, 00:13:17.962 "r_mbytes_per_sec": 0, 00:13:17.962 "w_mbytes_per_sec": 0 00:13:17.962 }, 00:13:17.962 "claimed": true, 00:13:17.962 "claim_type": "exclusive_write", 00:13:17.962 "zoned": false, 00:13:17.962 "supported_io_types": { 00:13:17.962 "read": true, 00:13:17.962 "write": true, 00:13:17.962 "unmap": true, 00:13:17.962 "flush": true, 00:13:17.962 "reset": true, 00:13:17.962 "nvme_admin": false, 00:13:17.962 "nvme_io": false, 00:13:17.962 "nvme_io_md": false, 00:13:17.962 "write_zeroes": true, 00:13:17.962 "zcopy": true, 00:13:17.962 "get_zone_info": false, 00:13:17.962 "zone_management": false, 00:13:17.962 "zone_append": false, 00:13:17.962 "compare": false, 00:13:17.962 "compare_and_write": 
false, 00:13:17.962 "abort": true, 00:13:17.962 "seek_hole": false, 00:13:17.962 "seek_data": false, 00:13:17.962 "copy": true, 00:13:17.962 "nvme_iov_md": false 00:13:17.962 }, 00:13:17.962 "memory_domains": [ 00:13:17.962 { 00:13:17.962 "dma_device_id": "system", 00:13:17.962 "dma_device_type": 1 00:13:17.962 }, 00:13:17.962 { 00:13:17.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.962 "dma_device_type": 2 00:13:17.962 } 00:13:17.962 ], 00:13:17.962 "driver_specific": {} 00:13:17.962 } 00:13:17.962 ]' 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:17.962 13:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:18.532 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:13:18.532 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:18.532 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:18.532 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:18.532 13:24:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:21.062 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:21.062 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:21.062 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.062 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:21.062 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.062 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:21.062 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:21.062 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:21.062 13:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:21.062 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:21.062 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:21.062 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:21.062 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:21.062 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:21.063 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:21.063 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:21.063 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:21.063 13:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:21.631 13:24:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:22.568 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:22.568 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:22.568 13:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:22.568 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:22.568 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:22.568 ************************************ 00:13:22.568 START TEST filesystem_ext4 00:13:22.568 ************************************ 00:13:22.568 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:22.568 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:22.568 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:22.568 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:22.568 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:22.568 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:22.568 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:22.568 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:22.568 13:24:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:22.568 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:22.568 13:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:22.568 mke2fs 1.47.0 (5-Feb-2023) 00:13:22.568 Discarding device blocks: 0/522240 done 00:13:22.826 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:22.826 Filesystem UUID: f1501c4c-52e4-4499-a439-80e68702dd09 00:13:22.826 Superblock backups stored on blocks: 00:13:22.826 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:22.826 00:13:22.826 Allocating group tables: 0/64 done 00:13:22.826 Writing inode tables: 0/64 done 00:13:26.115 Creating journal (8192 blocks): done 00:13:26.373 Writing superblocks and filesystem accounting information: 0/64 done 00:13:26.373 00:13:26.373 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:26.373 13:24:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:32.957 13:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1746341 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:32.957 00:13:32.957 real 0m9.612s 00:13:32.957 user 0m0.016s 00:13:32.957 sys 0m0.062s 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:32.957 ************************************ 00:13:32.957 END TEST filesystem_ext4 00:13:32.957 ************************************ 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:32.957 
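The ext4 test that just finished exercises the same smoke-test flow as the btrfs and xfs runs below: mount, touch a file, sync, remove it, sync, umount, then verify the partition is still visible. A minimal runnable sketch of that flow, using a plain temp directory in place of the real mounted /mnt/device (so no block device or mount privileges are assumed):

```shell
# Sketch of the per-filesystem smoke test traced in filesystem.sh
# (touch /mnt/device/aaa; sync; rm; sync; umount). A temp directory
# stands in for the mounted partition so the flow runs anywhere.
mnt=$(mktemp -d)
touch "$mnt/aaa"       # create a file on the fresh filesystem
sync                   # flush metadata and data
[ -f "$mnt/aaa" ]      # the file must exist before removal
rm "$mnt/aaa"          # delete it again
sync
result=$([ -e "$mnt/aaa" ] && echo present || echo absent)
rmdir "$mnt"           # stands in for umount /mnt/device
echo "$result"
```

The real test additionally retries the umount (the `i=0` counter seen in the trace) and greps lsblk for nvme0n1p1 afterward to confirm the namespace survived the I/O.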
13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.957 ************************************ 00:13:32.957 START TEST filesystem_btrfs 00:13:32.957 ************************************ 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:32.957 13:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:32.957 13:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:32.957 btrfs-progs v6.8.1 00:13:32.957 See https://btrfs.readthedocs.io for more information. 00:13:32.957 00:13:32.957 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:32.957 NOTE: several default settings have changed in version 5.15, please make sure 00:13:32.957 this does not affect your deployments: 00:13:32.957 - DUP for metadata (-m dup) 00:13:32.957 - enabled no-holes (-O no-holes) 00:13:32.957 - enabled free-space-tree (-R free-space-tree) 00:13:32.957 00:13:32.957 Label: (null) 00:13:32.957 UUID: 77ae0b71-9bc0-4971-a1ac-2e8827cd3afb 00:13:32.957 Node size: 16384 00:13:32.957 Sector size: 4096 (CPU page size: 4096) 00:13:32.957 Filesystem size: 510.00MiB 00:13:32.957 Block group profiles: 00:13:32.957 Data: single 8.00MiB 00:13:32.957 Metadata: DUP 32.00MiB 00:13:32.957 System: DUP 8.00MiB 00:13:32.957 SSD detected: yes 00:13:32.957 Zoned device: no 00:13:32.957 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:32.957 Checksum: crc32c 00:13:32.957 Number of devices: 1 00:13:32.957 Devices: 00:13:32.957 ID SIZE PATH 00:13:32.957 1 510.00MiB /dev/nvme0n1p1 00:13:32.957 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:32.957 13:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1746341 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:32.957 00:13:32.957 real 0m0.526s 00:13:32.957 user 0m0.025s 00:13:32.957 sys 0m0.102s 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:32.957 
13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:32.957 ************************************ 00:13:32.957 END TEST filesystem_btrfs 00:13:32.957 ************************************ 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:32.957 ************************************ 00:13:32.957 START TEST filesystem_xfs 00:13:32.957 ************************************ 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:32.957 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:32.958 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:13:32.958 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:13:32.958 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:13:32.958 13:24:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:32.958 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:32.958 = sectsz=512 attr=2, projid32bit=1 00:13:32.958 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:32.958 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:32.958 data = bsize=4096 blocks=130560, imaxpct=25 00:13:32.958 = sunit=0 swidth=0 blks 00:13:32.958 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:32.958 log =internal log bsize=4096 blocks=16384, version=2 00:13:32.958 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:32.958 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:33.894 Discarding blocks...Done. 
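Note how make_filesystem picks a different force flag per filesystem in the traces above: the `'[' ext4 = ext4 ']'` branch sets `force=-F` for mkfs.ext4, while btrfs and xfs fall through to `force=-f`. A minimal reimplementation of that dispatch (illustrative only, not the actual autotest_common.sh function):

```shell
# Mirror of the force-flag selection visible in the xtrace:
# mkfs.ext4 takes -F to overwrite an existing filesystem, while
# mkfs.btrfs and mkfs.xfs use -f for the same purpose.
force_flag() {
  case "$1" in
    ext4)       echo "-F" ;;
    btrfs|xfs)  echo "-f" ;;
    *)          echo "" ;;
  esac
}
echo "$(force_flag ext4) $(force_flag btrfs) $(force_flag xfs)"
```

This matches the three mkfs invocations in the log: `mkfs.ext4 -F`, `mkfs.btrfs -f`, and `mkfs.xfs -f`, each against /dev/nvme0n1p1.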
00:13:33.894 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:33.894 13:24:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:36.430 13:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:36.430 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:36.430 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:36.430 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:36.430 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:36.430 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:36.430 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1746341 00:13:36.430 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:36.430 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:36.430 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:36.430 13:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:36.430 00:13:36.430 real 0m3.665s 00:13:36.430 user 0m0.016s 00:13:36.430 sys 0m0.065s 00:13:36.430 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.430 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:36.430 ************************************ 00:13:36.430 END TEST filesystem_xfs 00:13:36.430 ************************************ 00:13:36.430 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:36.689 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:36.689 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:36.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1746341 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1746341 ']' 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1746341 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1746341 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1746341' 00:13:36.947 killing process with pid 1746341 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 1746341 00:13:36.947 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 1746341 00:13:37.514 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:37.514 00:13:37.514 real 0m19.958s 00:13:37.515 user 1m17.122s 00:13:37.515 sys 0m2.346s 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.515 ************************************ 00:13:37.515 END TEST nvmf_filesystem_no_in_capsule 00:13:37.515 ************************************ 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:37.515 13:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:37.515 ************************************ 00:13:37.515 START TEST nvmf_filesystem_in_capsule 00:13:37.515 ************************************ 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # nvmfpid=1749486 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # waitforlisten 1749486 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 1749486 ']' 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.515 13:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:37.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:37.515 13:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.515 [2024-10-07 13:24:19.047031] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:13:37.515 [2024-10-07 13:24:19.047111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.515 [2024-10-07 13:24:19.105933] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:37.515 [2024-10-07 13:24:19.207783] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.515 [2024-10-07 13:24:19.207847] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.515 [2024-10-07 13:24:19.207874] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.515 [2024-10-07 13:24:19.207885] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.515 [2024-10-07 13:24:19.207893] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:37.515 [2024-10-07 13:24:19.209327] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.515 [2024-10-07 13:24:19.209444] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.515 [2024-10-07 13:24:19.209507] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.515 [2024-10-07 13:24:19.209510] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.775 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:37.775 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:13:37.775 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:13:37.775 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:37.775 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.775 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.775 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:37.775 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:37.775 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.775 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:37.775 [2024-10-07 13:24:19.368190] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.775 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.775 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:37.775 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.775 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:38.036 Malloc1 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:38.036 13:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:38.036 [2024-10-07 13:24:19.549115] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.036 13:24:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:13:38.036 { 00:13:38.036 "name": "Malloc1", 00:13:38.036 "aliases": [ 00:13:38.036 "f02d96f9-1b4e-4685-88df-6cf0cb38423f" 00:13:38.036 ], 00:13:38.036 "product_name": "Malloc disk", 00:13:38.036 "block_size": 512, 00:13:38.036 "num_blocks": 1048576, 00:13:38.036 "uuid": "f02d96f9-1b4e-4685-88df-6cf0cb38423f", 00:13:38.036 "assigned_rate_limits": { 00:13:38.036 "rw_ios_per_sec": 0, 00:13:38.036 "rw_mbytes_per_sec": 0, 00:13:38.036 "r_mbytes_per_sec": 0, 00:13:38.036 "w_mbytes_per_sec": 0 00:13:38.036 }, 00:13:38.036 "claimed": true, 00:13:38.036 "claim_type": "exclusive_write", 00:13:38.036 "zoned": false, 00:13:38.036 "supported_io_types": { 00:13:38.036 "read": true, 00:13:38.036 "write": true, 00:13:38.036 "unmap": true, 00:13:38.036 "flush": true, 00:13:38.036 "reset": true, 00:13:38.036 "nvme_admin": false, 00:13:38.036 "nvme_io": false, 00:13:38.036 "nvme_io_md": false, 00:13:38.036 "write_zeroes": true, 00:13:38.036 "zcopy": true, 00:13:38.036 "get_zone_info": false, 00:13:38.036 "zone_management": false, 00:13:38.036 "zone_append": false, 00:13:38.036 "compare": false, 00:13:38.036 "compare_and_write": false, 00:13:38.036 "abort": true, 00:13:38.036 "seek_hole": false, 00:13:38.036 "seek_data": false, 00:13:38.036 "copy": true, 00:13:38.036 "nvme_iov_md": false 00:13:38.036 }, 00:13:38.036 "memory_domains": [ 00:13:38.036 { 00:13:38.036 "dma_device_id": "system", 00:13:38.036 "dma_device_type": 1 00:13:38.036 }, 00:13:38.036 { 00:13:38.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.036 "dma_device_type": 2 00:13:38.036 } 00:13:38.036 ], 00:13:38.036 
"driver_specific": {} 00:13:38.036 } 00:13:38.036 ]' 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:38.036 13:24:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:38.975 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.975 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:13:38.975 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.975 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:13:38.975 13:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:40.883 13:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:40.883 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:41.142 13:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:42.082 13:24:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:43.022 ************************************ 00:13:43.022 START TEST filesystem_in_capsule_ext4 00:13:43.022 ************************************ 00:13:43.022 13:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:13:43.022 13:24:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:43.022 mke2fs 1.47.0 (5-Feb-2023) 00:13:43.022 Discarding device blocks: 
0/522240 done 00:13:43.282 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:43.282 Filesystem UUID: f753820b-a4d4-4fe7-8437-b70bedc55bd5 00:13:43.282 Superblock backups stored on blocks: 00:13:43.282 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:43.282 00:13:43.282 Allocating group tables: 0/64 done 00:13:43.282 Writing inode tables: 0/64 done 00:13:45.189 Creating journal (8192 blocks): done 00:13:45.189 Writing superblocks and filesystem accounting information: 0/64 done 00:13:45.189 00:13:45.189 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:13:45.189 13:24:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1749486 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:51.788 00:13:51.788 real 0m8.113s 00:13:51.788 user 0m0.019s 00:13:51.788 sys 0m0.071s 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:51.788 ************************************ 00:13:51.788 END TEST filesystem_in_capsule_ext4 00:13:51.788 ************************************ 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:51.788 ************************************ 00:13:51.788 START 
TEST filesystem_in_capsule_btrfs 00:13:51.788 ************************************ 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:51.788 btrfs-progs v6.8.1 00:13:51.788 See https://btrfs.readthedocs.io for more information. 00:13:51.788 00:13:51.788 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:51.788 NOTE: several default settings have changed in version 5.15, please make sure 00:13:51.788 this does not affect your deployments: 00:13:51.788 - DUP for metadata (-m dup) 00:13:51.788 - enabled no-holes (-O no-holes) 00:13:51.788 - enabled free-space-tree (-R free-space-tree) 00:13:51.788 00:13:51.788 Label: (null) 00:13:51.788 UUID: 84358a3a-ed9a-487b-8857-b8b5c21cdd04 00:13:51.788 Node size: 16384 00:13:51.788 Sector size: 4096 (CPU page size: 4096) 00:13:51.788 Filesystem size: 510.00MiB 00:13:51.788 Block group profiles: 00:13:51.788 Data: single 8.00MiB 00:13:51.788 Metadata: DUP 32.00MiB 00:13:51.788 System: DUP 8.00MiB 00:13:51.788 SSD detected: yes 00:13:51.788 Zoned device: no 00:13:51.788 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:51.788 Checksum: crc32c 00:13:51.788 Number of devices: 1 00:13:51.788 Devices: 00:13:51.788 ID SIZE PATH 00:13:51.788 1 510.00MiB /dev/nvme0n1p1 00:13:51.788 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:13:51.788 13:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1749486 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:51.788 00:13:51.788 real 0m0.460s 00:13:51.788 user 0m0.022s 00:13:51.788 sys 0m0.092s 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:51.788 ************************************ 00:13:51.788 END TEST filesystem_in_capsule_btrfs 00:13:51.788 ************************************ 00:13:51.788 13:24:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:51.788 ************************************ 00:13:51.788 START TEST filesystem_in_capsule_xfs 00:13:51.788 ************************************ 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:13:51.788 
13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force
00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']'
00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f
00:13:51.788 13:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1
00:13:51.788 meta-data=/dev/nvme0n1p1         isize=512    agcount=4, agsize=32640 blks
00:13:51.788          =                       sectsz=512   attr=2, projid32bit=1
00:13:51.788          =                       crc=1        finobt=1, sparse=1, rmapbt=0
00:13:51.788          =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
00:13:51.788 data     =                       bsize=4096   blocks=130560, imaxpct=25
00:13:51.788          =                       sunit=0      swidth=0 blks
00:13:51.788 naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
00:13:51.788 log      =internal log           bsize=4096   blocks=16384, version=2
00:13:51.789          =                       sectsz=512   sunit=0 blks, lazy-count=1
00:13:51.789 realtime =none                   extsz=4096   blocks=0, rtextents=0
00:13:52.749 Discarding blocks...Done. 
00:13:52.749 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:13:52.749 13:24:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:55.284 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:55.284 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:55.284 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:55.284 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:55.284 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:55.284 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:55.284 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1749486 00:13:55.284 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:55.284 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:55.285 00:13:55.285 real 0m3.215s 00:13:55.285 user 0m0.010s 00:13:55.285 sys 0m0.066s 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:55.285 ************************************ 00:13:55.285 END TEST filesystem_in_capsule_xfs 00:13:55.285 ************************************ 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.285 13:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1749486 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 1749486 ']' 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 1749486 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:55.285 13:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1749486 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1749486' 00:13:55.285 killing process with pid 1749486 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 1749486 00:13:55.285 13:24:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 1749486 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:55.856 00:13:55.856 real 0m18.415s 00:13:55.856 user 1m11.108s 00:13:55.856 sys 0m2.234s 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:55.856 ************************************ 00:13:55.856 END TEST nvmf_filesystem_in_capsule 00:13:55.856 ************************************ 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@514 -- # nvmfcleanup 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:55.856 rmmod nvme_tcp 00:13:55.856 rmmod nvme_fabrics 00:13:55.856 rmmod nvme_keyring 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@515 -- # '[' -n '' ']' 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-save 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@789 -- # iptables-restore 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:55.856 13:24:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:58.395 00:13:58.395 real 0m43.214s 00:13:58.395 user 2m29.324s 00:13:58.395 sys 0m6.339s 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:58.395 ************************************ 00:13:58.395 END TEST nvmf_filesystem 00:13:58.395 ************************************ 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:58.395 ************************************ 00:13:58.395 START TEST nvmf_target_discovery 00:13:58.395 ************************************ 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:58.395 * Looking for test storage... 
00:13:58.395 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:13:58.395 
13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:58.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.395 --rc genhtml_branch_coverage=1 00:13:58.395 --rc genhtml_function_coverage=1 00:13:58.395 --rc genhtml_legend=1 00:13:58.395 --rc geninfo_all_blocks=1 00:13:58.395 --rc geninfo_unexecuted_blocks=1 00:13:58.395 00:13:58.395 ' 00:13:58.395 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:58.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.396 --rc genhtml_branch_coverage=1 00:13:58.396 --rc genhtml_function_coverage=1 00:13:58.396 --rc genhtml_legend=1 00:13:58.396 --rc geninfo_all_blocks=1 00:13:58.396 --rc geninfo_unexecuted_blocks=1 00:13:58.396 00:13:58.396 ' 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:58.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.396 --rc genhtml_branch_coverage=1 00:13:58.396 --rc genhtml_function_coverage=1 00:13:58.396 --rc genhtml_legend=1 00:13:58.396 --rc geninfo_all_blocks=1 00:13:58.396 --rc geninfo_unexecuted_blocks=1 00:13:58.396 00:13:58.396 ' 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:58.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.396 --rc genhtml_branch_coverage=1 00:13:58.396 --rc genhtml_function_coverage=1 00:13:58.396 --rc genhtml_legend=1 00:13:58.396 --rc geninfo_all_blocks=1 00:13:58.396 --rc geninfo_unexecuted_blocks=1 00:13:58.396 00:13:58.396 ' 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.396 13:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:58.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # prepare_net_devs 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@436 -- # local -g is_hw=no 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # remove_spdk_ns 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:13:58.396 13:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.300 13:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.300 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.300 13:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:14:00.301 Found 0000:09:00.0 (0x8086 - 0x1592) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:14:00.301 Found 0000:09:00.1 (0x8086 - 0x1592) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:00.301 13:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:00.301 Found net devices under 0000:09:00.0: cvl_0_0 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:00.301 13:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:00.301 Found net devices under 0000:09:00.1: cvl_0_1 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # is_hw=yes 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:00.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:14:00.301 00:14:00.301 --- 10.0.0.2 ping statistics --- 00:14:00.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.301 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:00.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:14:00.301 00:14:00.301 --- 10.0.0.1 ping statistics --- 00:14:00.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.301 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # return 0 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # nvmfpid=1753590 00:14:00.301 13:24:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # waitforlisten 1753590 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 1753590 ']' 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.301 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:00.302 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.302 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:00.302 13:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.560 [2024-10-07 13:24:42.049558] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:14:00.560 [2024-10-07 13:24:42.049651] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.560 [2024-10-07 13:24:42.109553] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:00.560 [2024-10-07 13:24:42.213031] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:00.560 [2024-10-07 13:24:42.213088] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.560 [2024-10-07 13:24:42.213112] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.560 [2024-10-07 13:24:42.213123] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.560 [2024-10-07 13:24:42.213133] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.560 [2024-10-07 13:24:42.214735] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.560 [2024-10-07 13:24:42.214795] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.560 [2024-10-07 13:24:42.214798] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.560 [2024-10-07 13:24:42.214770] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.819 [2024-10-07 13:24:42.377477] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.819 Null1 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.819 
13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.819 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 [2024-10-07 13:24:42.417885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 Null2 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 
13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 Null3 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 Null4 00:14:00.820 
13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.820 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 4420 00:14:01.081 00:14:01.081 Discovery Log Number of Records 6, Generation counter 6 00:14:01.081 =====Discovery Log Entry 0====== 00:14:01.081 trtype: tcp 00:14:01.081 adrfam: ipv4 00:14:01.081 subtype: current discovery subsystem 00:14:01.081 treq: not required 00:14:01.081 portid: 0 00:14:01.081 trsvcid: 4420 00:14:01.081 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:01.081 traddr: 10.0.0.2 00:14:01.081 eflags: explicit discovery connections, duplicate discovery information 00:14:01.081 sectype: none 00:14:01.081 =====Discovery Log Entry 1====== 00:14:01.081 trtype: tcp 00:14:01.081 adrfam: ipv4 00:14:01.081 subtype: nvme subsystem 00:14:01.081 treq: not required 00:14:01.081 portid: 0 00:14:01.081 trsvcid: 4420 00:14:01.081 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:01.081 traddr: 10.0.0.2 00:14:01.081 eflags: none 00:14:01.081 sectype: none 00:14:01.081 =====Discovery Log Entry 2====== 00:14:01.081 
trtype: tcp 00:14:01.081 adrfam: ipv4 00:14:01.081 subtype: nvme subsystem 00:14:01.081 treq: not required 00:14:01.081 portid: 0 00:14:01.081 trsvcid: 4420 00:14:01.081 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:01.081 traddr: 10.0.0.2 00:14:01.081 eflags: none 00:14:01.081 sectype: none 00:14:01.081 =====Discovery Log Entry 3====== 00:14:01.081 trtype: tcp 00:14:01.081 adrfam: ipv4 00:14:01.081 subtype: nvme subsystem 00:14:01.081 treq: not required 00:14:01.081 portid: 0 00:14:01.081 trsvcid: 4420 00:14:01.081 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:01.081 traddr: 10.0.0.2 00:14:01.081 eflags: none 00:14:01.081 sectype: none 00:14:01.081 =====Discovery Log Entry 4====== 00:14:01.081 trtype: tcp 00:14:01.081 adrfam: ipv4 00:14:01.081 subtype: nvme subsystem 00:14:01.081 treq: not required 00:14:01.081 portid: 0 00:14:01.081 trsvcid: 4420 00:14:01.081 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:01.081 traddr: 10.0.0.2 00:14:01.081 eflags: none 00:14:01.081 sectype: none 00:14:01.081 =====Discovery Log Entry 5====== 00:14:01.081 trtype: tcp 00:14:01.081 adrfam: ipv4 00:14:01.081 subtype: discovery subsystem referral 00:14:01.081 treq: not required 00:14:01.081 portid: 0 00:14:01.081 trsvcid: 4430 00:14:01.081 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:01.081 traddr: 10.0.0.2 00:14:01.081 eflags: none 00:14:01.081 sectype: none 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:01.081 Perform nvmf subsystem discovery via RPC 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.081 [ 00:14:01.081 { 00:14:01.081 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:14:01.081 "subtype": "Discovery", 00:14:01.081 "listen_addresses": [ 00:14:01.081 { 00:14:01.081 "trtype": "TCP", 00:14:01.081 "adrfam": "IPv4", 00:14:01.081 "traddr": "10.0.0.2", 00:14:01.081 "trsvcid": "4420" 00:14:01.081 } 00:14:01.081 ], 00:14:01.081 "allow_any_host": true, 00:14:01.081 "hosts": [] 00:14:01.081 }, 00:14:01.081 { 00:14:01.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.081 "subtype": "NVMe", 00:14:01.081 "listen_addresses": [ 00:14:01.081 { 00:14:01.081 "trtype": "TCP", 00:14:01.081 "adrfam": "IPv4", 00:14:01.081 "traddr": "10.0.0.2", 00:14:01.081 "trsvcid": "4420" 00:14:01.081 } 00:14:01.081 ], 00:14:01.081 "allow_any_host": true, 00:14:01.081 "hosts": [], 00:14:01.081 "serial_number": "SPDK00000000000001", 00:14:01.081 "model_number": "SPDK bdev Controller", 00:14:01.081 "max_namespaces": 32, 00:14:01.081 "min_cntlid": 1, 00:14:01.081 "max_cntlid": 65519, 00:14:01.081 "namespaces": [ 00:14:01.081 { 00:14:01.081 "nsid": 1, 00:14:01.081 "bdev_name": "Null1", 00:14:01.081 "name": "Null1", 00:14:01.081 "nguid": "2CB7A2F3FB0443A0B47841FE0F33ED37", 00:14:01.081 "uuid": "2cb7a2f3-fb04-43a0-b478-41fe0f33ed37" 00:14:01.081 } 00:14:01.081 ] 00:14:01.081 }, 00:14:01.081 { 00:14:01.081 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:01.081 "subtype": "NVMe", 00:14:01.081 "listen_addresses": [ 00:14:01.081 { 00:14:01.081 "trtype": "TCP", 00:14:01.081 "adrfam": "IPv4", 00:14:01.081 "traddr": "10.0.0.2", 00:14:01.081 "trsvcid": "4420" 00:14:01.081 } 00:14:01.081 ], 00:14:01.081 "allow_any_host": true, 00:14:01.081 "hosts": [], 00:14:01.081 "serial_number": "SPDK00000000000002", 00:14:01.081 "model_number": "SPDK bdev Controller", 00:14:01.081 "max_namespaces": 32, 00:14:01.081 "min_cntlid": 1, 00:14:01.081 "max_cntlid": 65519, 00:14:01.081 "namespaces": [ 00:14:01.081 { 00:14:01.081 "nsid": 1, 00:14:01.081 "bdev_name": "Null2", 00:14:01.081 "name": "Null2", 00:14:01.081 "nguid": "FE24CFBBD9774293980D4E542C590225", 
00:14:01.081 "uuid": "fe24cfbb-d977-4293-980d-4e542c590225" 00:14:01.081 } 00:14:01.081 ] 00:14:01.081 }, 00:14:01.081 { 00:14:01.081 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:01.081 "subtype": "NVMe", 00:14:01.081 "listen_addresses": [ 00:14:01.081 { 00:14:01.081 "trtype": "TCP", 00:14:01.081 "adrfam": "IPv4", 00:14:01.081 "traddr": "10.0.0.2", 00:14:01.081 "trsvcid": "4420" 00:14:01.081 } 00:14:01.081 ], 00:14:01.081 "allow_any_host": true, 00:14:01.081 "hosts": [], 00:14:01.081 "serial_number": "SPDK00000000000003", 00:14:01.081 "model_number": "SPDK bdev Controller", 00:14:01.081 "max_namespaces": 32, 00:14:01.081 "min_cntlid": 1, 00:14:01.081 "max_cntlid": 65519, 00:14:01.081 "namespaces": [ 00:14:01.081 { 00:14:01.081 "nsid": 1, 00:14:01.081 "bdev_name": "Null3", 00:14:01.081 "name": "Null3", 00:14:01.081 "nguid": "7E2940F3BF174F368E20D1BED3946BCB", 00:14:01.081 "uuid": "7e2940f3-bf17-4f36-8e20-d1bed3946bcb" 00:14:01.081 } 00:14:01.081 ] 00:14:01.081 }, 00:14:01.081 { 00:14:01.081 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:01.081 "subtype": "NVMe", 00:14:01.081 "listen_addresses": [ 00:14:01.081 { 00:14:01.081 "trtype": "TCP", 00:14:01.081 "adrfam": "IPv4", 00:14:01.081 "traddr": "10.0.0.2", 00:14:01.081 "trsvcid": "4420" 00:14:01.081 } 00:14:01.081 ], 00:14:01.081 "allow_any_host": true, 00:14:01.081 "hosts": [], 00:14:01.081 "serial_number": "SPDK00000000000004", 00:14:01.081 "model_number": "SPDK bdev Controller", 00:14:01.081 "max_namespaces": 32, 00:14:01.081 "min_cntlid": 1, 00:14:01.081 "max_cntlid": 65519, 00:14:01.081 "namespaces": [ 00:14:01.081 { 00:14:01.081 "nsid": 1, 00:14:01.081 "bdev_name": "Null4", 00:14:01.081 "name": "Null4", 00:14:01.081 "nguid": "633767FB445B4E4FB0A91EE6D5E09AC4", 00:14:01.081 "uuid": "633767fb-445b-4e4f-b0a9-1ee6d5e09ac4" 00:14:01.081 } 00:14:01.081 ] 00:14:01.081 } 00:14:01.081 ] 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.081 
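The `nvmf_get_subsystems` reply logged above is plain JSON. As a minimal sketch of pulling the subsystem NQNs out of such a payload with shell tools alone (the `json` string here is a trimmed stand-in for the real RPC reply, and `extract_nqns` is a made-up helper; the actual test scripts pipe RPC output through `jq` instead):

```shell
# Trimmed stand-in for the nvmf_get_subsystems reply shown in the log above;
# only the "nqn" fields are kept for this illustration.
json='[{"nqn":"nqn.2014-08.org.nvmexpress.discovery"},{"nqn":"nqn.2016-06.io.spdk:cnode1"}]'

# Hypothetical helper: grab every "nqn" value, then strip the key and quotes.
extract_nqns() {
  grep -o '"nqn": *"[^"]*"' | sed 's/.*: *"//; s/"$//'
}

echo "$json" | extract_nqns
```

Given the sample payload this prints the discovery NQN followed by `nqn.2016-06.io.spdk:cnode1`, one per line.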
13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.081 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:01.082 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.082 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.082 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.082 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:01.082 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:01.082 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.082 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:14:01.082 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.082 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:01.082 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.082 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
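The teardown sequence logged above (delete each subsystem, then its backing null bdev, then drop the discovery referral) follows a simple loop in `discovery.sh`. A dry-run sketch, with `rpc_cmd` replaced by an echo stub rather than the real wrapper around `scripts/rpc.py`:

```shell
# Stub standing in for the real rpc_cmd helper, which shells out to scripts/rpc.py.
rpc_cmd() { echo "rpc.py $*"; }

# Mirror of the cleanup loop seen in the log: subsystem first, then its null bdev.
for i in $(seq 1 4); do
  rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  rpc_cmd bdev_null_delete "Null$i"
done

# Finally remove the discovery referral registered earlier in the test.
rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
```

With the stub in place this only prints the nine RPC invocations; against a live target the same loop tears the test topology down in order.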
target/discovery.sh@50 -- # '[' -n '' ']' 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:01.343 rmmod nvme_tcp 00:14:01.343 rmmod nvme_fabrics 00:14:01.343 rmmod nvme_keyring 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@515 -- # '[' -n 1753590 ']' 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # killprocess 1753590 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 1753590 ']' 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 1753590 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 
00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1753590 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1753590' 00:14:01.343 killing process with pid 1753590 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 1753590 00:14:01.343 13:24:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 1753590 00:14:01.604 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:01.604 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:01.604 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:01.604 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:14:01.604 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-save 00:14:01.604 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:01.604 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@789 -- # iptables-restore 00:14:01.604 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:01.604 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:14:01.604 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.604 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.604 13:24:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:04.165 00:14:04.165 real 0m5.698s 00:14:04.165 user 0m4.862s 00:14:04.165 sys 0m1.913s 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:04.165 ************************************ 00:14:04.165 END TEST nvmf_target_discovery 00:14:04.165 ************************************ 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:04.165 ************************************ 00:14:04.165 START TEST nvmf_referrals 00:14:04.165 ************************************ 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:04.165 * Looking for test storage... 
00:14:04.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lcov --version 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:14:04.165 13:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:04.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.165 
--rc genhtml_branch_coverage=1 00:14:04.165 --rc genhtml_function_coverage=1 00:14:04.165 --rc genhtml_legend=1 00:14:04.165 --rc geninfo_all_blocks=1 00:14:04.165 --rc geninfo_unexecuted_blocks=1 00:14:04.165 00:14:04.165 ' 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:04.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.165 --rc genhtml_branch_coverage=1 00:14:04.165 --rc genhtml_function_coverage=1 00:14:04.165 --rc genhtml_legend=1 00:14:04.165 --rc geninfo_all_blocks=1 00:14:04.165 --rc geninfo_unexecuted_blocks=1 00:14:04.165 00:14:04.165 ' 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:04.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.165 --rc genhtml_branch_coverage=1 00:14:04.165 --rc genhtml_function_coverage=1 00:14:04.165 --rc genhtml_legend=1 00:14:04.165 --rc geninfo_all_blocks=1 00:14:04.165 --rc geninfo_unexecuted_blocks=1 00:14:04.165 00:14:04.165 ' 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:04.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.165 --rc genhtml_branch_coverage=1 00:14:04.165 --rc genhtml_function_coverage=1 00:14:04.165 --rc genhtml_legend=1 00:14:04.165 --rc geninfo_all_blocks=1 00:14:04.165 --rc geninfo_unexecuted_blocks=1 00:14:04.165 00:14:04.165 ' 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.165 
13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.165 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.166 13:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:04.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:04.166 13:24:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:14:04.166 13:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:14:06.068 Found 0000:09:00.0 (0x8086 - 0x1592) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:14:06.068 Found 
0000:09:00.1 (0x8086 - 0x1592) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:06.068 Found net devices under 0000:09:00.0: cvl_0_0 00:14:06.068 13:24:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:06.068 Found net devices under 0000:09:00.1: cvl_0_1 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # is_hw=yes 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.068 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:06.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:14:06.069 00:14:06.069 --- 10.0.0.2 ping statistics --- 00:14:06.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.069 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:06.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:14:06.069 00:14:06.069 --- 10.0.0.1 ping statistics --- 00:14:06.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.069 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # return 0 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # nvmfpid=1755593 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # waitforlisten 1755593 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 1755593 ']' 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:06.069 13:24:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.328 [2024-10-07 13:24:47.825317] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:14:06.328 [2024-10-07 13:24:47.825386] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.328 [2024-10-07 13:24:47.885097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.329 [2024-10-07 13:24:47.998962] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.329 [2024-10-07 13:24:47.999028] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
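The trace above shows the harness launching `nvmf_tgt` inside the `cvl_0_0_ns_spdk` network namespace (via the `NVMF_TARGET_NS_CMD` prefix built earlier in `nvmf/common.sh`), then waiting for its RPC socket. A minimal sketch of that command construction, with `build_nvmf_tgt_cmd` as a hypothetical helper for illustration (the namespace name, binary path, and masks are taken from the log):

```shell
#!/usr/bin/env bash
# Sketch only: mirrors how the harness prefixes the target command with
# "ip netns exec <ns>" so nvmf_tgt sees only the interface that was moved
# into that namespace. build_nvmf_tgt_cmd is not part of nvmf/common.sh.

build_nvmf_tgt_cmd() {
    local ns="$1" binary="$2" mask="$3"
    # -i 0: shm id, -e 0xFFFF: tracepoint group mask, -m: reactor core mask
    echo "ip netns exec $ns $binary -i 0 -e 0xFFFF -m $mask"
}

build_nvmf_tgt_cmd cvl_0_0_ns_spdk ./build/bin/nvmf_tgt 0xF
```

This only prints the command string; actually running it requires root and a namespace prepared with `ip netns add` / `ip link set ... netns ...` as seen earlier in the trace.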
00:14:06.329 [2024-10-07 13:24:47.999051] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.329 [2024-10-07 13:24:47.999063] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.329 [2024-10-07 13:24:47.999073] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.329 [2024-10-07 13:24:48.000697] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.329 [2024-10-07 13:24:48.000732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.329 [2024-10-07 13:24:48.000765] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.329 [2024-10-07 13:24:48.000769] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.588 [2024-10-07 13:24:48.161169] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.588 [2024-10-07 13:24:48.173469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:06.588 13:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:06.588 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.845 13:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:06.845 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:07.102 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.360 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:07.360 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:07.360 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:07.360 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:07.361 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:07.361 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:07.361 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:07.361 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:07.361 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:07.361 13:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:07.361 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:07.361 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:07.361 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:07.361 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:07.361 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:07.620 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:07.620 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:07.620 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:07.620 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:07.620 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:07.621 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:07.880 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:08.138 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:08.139 13:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:08.139 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:08.139 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:08.139 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:08.139 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:08.397 13:24:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:08.397 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:08.397 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:08.397 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:08.397 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:08.397 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:08.397 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:08.397 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:08.397 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:14:08.397 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:08.397 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:08.397 rmmod nvme_tcp 00:14:08.397 rmmod nvme_fabrics 00:14:08.654 rmmod nvme_keyring 00:14:08.654 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:08.654 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:08.654 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:08.655 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@515 -- # '[' -n 1755593 ']' 00:14:08.655 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # killprocess 1755593 00:14:08.655 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 1755593 ']' 00:14:08.655 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 1755593 00:14:08.655 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:14:08.655 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:08.655 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1755593 00:14:08.655 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:08.655 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:08.655 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1755593' 00:14:08.655 killing process with pid 1755593 00:14:08.655 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 1755593 00:14:08.655 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 1755593 00:14:08.912 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:08.912 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:08.912 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:08.912 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:14:08.912 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-save 00:14:08.912 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:08.913 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@789 -- # iptables-restore 00:14:08.913 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:08.913 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:08.913 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:08.913 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:08.913 13:24:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.819 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:10.819 00:14:10.819 real 0m7.139s 00:14:10.819 user 0m11.055s 00:14:10.819 sys 0m2.384s 00:14:10.819 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:10.819 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:10.819 
************************************ 00:14:10.819 END TEST nvmf_referrals 00:14:10.819 ************************************ 00:14:10.819 13:24:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:10.819 13:24:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:10.819 13:24:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:10.819 13:24:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.819 ************************************ 00:14:10.819 START TEST nvmf_connect_disconnect 00:14:10.819 ************************************ 00:14:10.819 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:11.078 * Looking for test storage... 
00:14:11.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lcov --version 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:11.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.078 --rc genhtml_branch_coverage=1 00:14:11.078 --rc genhtml_function_coverage=1 00:14:11.078 --rc genhtml_legend=1 00:14:11.078 --rc geninfo_all_blocks=1 00:14:11.078 --rc geninfo_unexecuted_blocks=1 00:14:11.078 00:14:11.078 ' 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:11.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.078 --rc genhtml_branch_coverage=1 00:14:11.078 --rc genhtml_function_coverage=1 00:14:11.078 --rc genhtml_legend=1 00:14:11.078 --rc geninfo_all_blocks=1 00:14:11.078 --rc geninfo_unexecuted_blocks=1 00:14:11.078 00:14:11.078 ' 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:11.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.078 --rc genhtml_branch_coverage=1 00:14:11.078 --rc genhtml_function_coverage=1 00:14:11.078 --rc genhtml_legend=1 00:14:11.078 --rc geninfo_all_blocks=1 00:14:11.078 --rc geninfo_unexecuted_blocks=1 00:14:11.078 00:14:11.078 ' 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:11.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.078 --rc genhtml_branch_coverage=1 00:14:11.078 --rc genhtml_function_coverage=1 00:14:11.078 --rc genhtml_legend=1 00:14:11.078 --rc geninfo_all_blocks=1 00:14:11.078 --rc geninfo_unexecuted_blocks=1 00:14:11.078 00:14:11.078 ' 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:11.078 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:11.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:14:11.079 13:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:13.614 13:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:13.614 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:13.615 13:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:14:13.615 Found 0000:09:00.0 (0x8086 - 0x1592) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:14:13.615 Found 0000:09:00.1 (0x8086 - 0x1592) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:13.615 13:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:13.615 Found net devices under 0000:09:00.0: cvl_0_0 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:13.615 13:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:13.615 Found net devices under 0000:09:00.1: cvl_0_1 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # is_hw=yes 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:13.615 13:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:13.615 13:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:13.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:13.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:14:13.615 00:14:13.615 --- 10.0.0.2 ping statistics --- 00:14:13.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.615 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:13.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:13.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:14:13.616 00:14:13.616 --- 10.0.0.1 ping statistics --- 00:14:13.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:13.616 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # return 0 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # 
nvmfpid=1757895 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # waitforlisten 1757895 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 1757895 ']' 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:13.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:13.616 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.616 [2024-10-07 13:24:55.086515] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:14:13.616 [2024-10-07 13:24:55.086601] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:13.616 [2024-10-07 13:24:55.150385] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:13.616 [2024-10-07 13:24:55.258521] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:13.616 [2024-10-07 13:24:55.258587] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:13.616 [2024-10-07 13:24:55.258600] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:13.616 [2024-10-07 13:24:55.258611] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:13.616 [2024-10-07 13:24:55.258620] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:13.616 [2024-10-07 13:24:55.260324] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:13.616 [2024-10-07 13:24:55.260358] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:13.616 [2024-10-07 13:24:55.260415] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.616 [2024-10-07 13:24:55.260418] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:13.875 13:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.875 [2024-10-07 13:24:55.428227] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.875 13:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:13.875 [2024-10-07 13:24:55.489685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:13.875 13:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:17.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.064 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:28.064 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:28.064 13:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:28.064 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:28.064 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:28.064 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:28.064 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:28.065 rmmod nvme_tcp 00:14:28.065 rmmod nvme_fabrics 00:14:28.065 rmmod nvme_keyring 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@515 -- # '[' -n 1757895 ']' 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # killprocess 1757895 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 1757895 ']' 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 1757895 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1757895 
00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1757895' 00:14:28.065 killing process with pid 1757895 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 1757895 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 1757895 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-save 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@789 -- # iptables-restore 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.065 13:25:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.065 13:25:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:30.619 00:14:30.619 real 0m19.268s 00:14:30.619 user 0m57.445s 00:14:30.619 sys 0m3.482s 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:30.619 ************************************ 00:14:30.619 END TEST nvmf_connect_disconnect 00:14:30.619 ************************************ 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:30.619 ************************************ 00:14:30.619 START TEST nvmf_multitarget 00:14:30.619 ************************************ 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:30.619 * Looking for test storage... 
00:14:30.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lcov --version 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.619 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:30.620 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.620 --rc genhtml_branch_coverage=1 00:14:30.620 --rc genhtml_function_coverage=1 00:14:30.620 --rc genhtml_legend=1 00:14:30.620 --rc geninfo_all_blocks=1 00:14:30.620 --rc geninfo_unexecuted_blocks=1 00:14:30.620 00:14:30.620 ' 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:30.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.620 --rc genhtml_branch_coverage=1 00:14:30.620 --rc genhtml_function_coverage=1 00:14:30.620 --rc genhtml_legend=1 00:14:30.620 --rc geninfo_all_blocks=1 00:14:30.620 --rc geninfo_unexecuted_blocks=1 00:14:30.620 00:14:30.620 ' 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:30.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.620 --rc genhtml_branch_coverage=1 00:14:30.620 --rc genhtml_function_coverage=1 00:14:30.620 --rc genhtml_legend=1 00:14:30.620 --rc geninfo_all_blocks=1 00:14:30.620 --rc geninfo_unexecuted_blocks=1 00:14:30.620 00:14:30.620 ' 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:30.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.620 --rc genhtml_branch_coverage=1 00:14:30.620 --rc genhtml_function_coverage=1 00:14:30.620 --rc genhtml_legend=1 00:14:30.620 --rc geninfo_all_blocks=1 00:14:30.620 --rc geninfo_unexecuted_blocks=1 00:14:30.620 00:14:30.620 ' 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.620 13:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.620 13:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:30.620 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:30.620 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:30.621 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:30.621 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:30.621 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:30.621 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.621 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:30.621 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:30.621 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:30.621 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.621 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.621 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.621 13:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:30.621 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:30.621 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:30.621 13:25:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:32.553 13:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:32.553 13:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:14:32.553 Found 0000:09:00.0 (0x8086 - 0x1592) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:14:32.553 Found 0000:09:00.1 (0x8086 - 0x1592) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:14:32.553 13:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:32.553 Found net devices under 0000:09:00.0: cvl_0_0 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.553 
13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:32.553 Found net devices under 0000:09:00.1: cvl_0_1 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # is_hw=yes 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:32.553 13:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:32.553 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:32.554 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:32.554 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:32.554 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:14:32.554 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:32.554 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:32.554 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:32.554 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.554 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:14:32.554 00:14:32.554 --- 10.0.0.2 ping statistics --- 00:14:32.554 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.554 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:14:32.554 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:32.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:32.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:14:32.834 00:14:32.834 --- 10.0.0.1 ping statistics --- 00:14:32.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.834 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:14:32.834 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.834 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # return 0 00:14:32.834 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # nvmfpid=1761490 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@506 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # waitforlisten 1761490 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 1761490 ']' 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:32.835 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:32.835 [2024-10-07 13:25:14.321551] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:14:32.835 [2024-10-07 13:25:14.321646] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.835 [2024-10-07 13:25:14.384464] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:32.835 [2024-10-07 13:25:14.490951] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.835 [2024-10-07 13:25:14.491013] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:32.835 [2024-10-07 13:25:14.491043] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.835 [2024-10-07 13:25:14.491054] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.835 [2024-10-07 13:25:14.491063] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.835 [2024-10-07 13:25:14.492607] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.835 [2024-10-07 13:25:14.492675] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.835 [2024-10-07 13:25:14.492738] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.835 [2024-10-07 13:25:14.492742] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.099 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:33.099 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:14:33.099 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:33.099 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:33.100 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:33.100 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:33.100 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:33.100 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:33.100 13:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:33.100 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:33.100 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:33.357 "nvmf_tgt_1" 00:14:33.357 13:25:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:33.357 "nvmf_tgt_2" 00:14:33.357 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:33.357 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:33.614 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:33.614 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:33.614 true 00:14:33.614 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:33.872 true 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:33.872 rmmod nvme_tcp 00:14:33.872 rmmod nvme_fabrics 00:14:33.872 rmmod nvme_keyring 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@515 -- # '[' -n 1761490 ']' 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # killprocess 1761490 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 1761490 ']' 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 1761490 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1761490 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1761490' 00:14:33.872 killing process with pid 1761490 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 1761490 00:14:33.872 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 1761490 00:14:34.130 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:34.130 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:34.130 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:34.130 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:34.130 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-save 00:14:34.130 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:34.130 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@789 -- # iptables-restore 00:14:34.130 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.130 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:34.130 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.130 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.130 13:25:15 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.670 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:36.670 00:14:36.670 real 0m6.025s 00:14:36.670 user 0m6.818s 00:14:36.670 sys 0m2.064s 00:14:36.670 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:36.670 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:36.670 ************************************ 00:14:36.670 END TEST nvmf_multitarget 00:14:36.670 ************************************ 00:14:36.670 13:25:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:36.670 13:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:36.670 13:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:36.670 13:25:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:36.670 ************************************ 00:14:36.670 START TEST nvmf_rpc 00:14:36.670 ************************************ 00:14:36.670 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:36.670 * Looking for test storage... 
00:14:36.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.670 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:36.670 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:14:36.670 13:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:36.670 13:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.670 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:36.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.671 --rc genhtml_branch_coverage=1 00:14:36.671 --rc genhtml_function_coverage=1 00:14:36.671 --rc genhtml_legend=1 00:14:36.671 --rc geninfo_all_blocks=1 00:14:36.671 --rc geninfo_unexecuted_blocks=1 
00:14:36.671 00:14:36.671 ' 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:36.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.671 --rc genhtml_branch_coverage=1 00:14:36.671 --rc genhtml_function_coverage=1 00:14:36.671 --rc genhtml_legend=1 00:14:36.671 --rc geninfo_all_blocks=1 00:14:36.671 --rc geninfo_unexecuted_blocks=1 00:14:36.671 00:14:36.671 ' 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:36.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.671 --rc genhtml_branch_coverage=1 00:14:36.671 --rc genhtml_function_coverage=1 00:14:36.671 --rc genhtml_legend=1 00:14:36.671 --rc geninfo_all_blocks=1 00:14:36.671 --rc geninfo_unexecuted_blocks=1 00:14:36.671 00:14:36.671 ' 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:36.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.671 --rc genhtml_branch_coverage=1 00:14:36.671 --rc genhtml_function_coverage=1 00:14:36.671 --rc genhtml_legend=1 00:14:36.671 --rc geninfo_all_blocks=1 00:14:36.671 --rc geninfo_unexecuted_blocks=1 00:14:36.671 00:14:36.671 ' 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.671 13:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:36.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # prepare_net_devs 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@436 -- # local -g is_hw=no 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # remove_spdk_ns 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:14:36.671 13:25:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:36.671 13:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.573 
13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 
(0x8086 - 0x1592)' 00:14:38.573 Found 0000:09:00.0 (0x8086 - 0x1592) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:14:38.573 Found 0000:09:00.1 (0x8086 - 0x1592) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:38.573 Found net devices under 0000:09:00.0: cvl_0_0 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ up == up ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:38.573 Found net devices under 0000:09:00.1: cvl_0_1 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.573 13:25:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # is_hw=yes 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:38.573 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:38.574 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.574 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.574 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:38.574 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:38.574 
13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.574 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.574 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.574 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.574 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:38.574 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.574 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:38.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:14:38.852 00:14:38.852 --- 10.0.0.2 ping statistics --- 00:14:38.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.852 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:14:38.852 00:14:38.852 --- 10.0.0.1 ping statistics --- 00:14:38.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.852 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # return 0 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # nvmfpid=1763495 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.852 
13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # waitforlisten 1763495 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 1763495 ']' 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:38.852 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.852 [2024-10-07 13:25:20.379108] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:14:38.852 [2024-10-07 13:25:20.379190] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.852 [2024-10-07 13:25:20.440094] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.852 [2024-10-07 13:25:20.547216] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.852 [2024-10-07 13:25:20.547268] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.852 [2024-10-07 13:25:20.547296] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.852 [2024-10-07 13:25:20.547306] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:38.852 [2024-10-07 13:25:20.547316] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.852 [2024-10-07 13:25:20.548879] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.852 [2024-10-07 13:25:20.548931] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.852 [2024-10-07 13:25:20.549010] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.852 [2024-10-07 13:25:20.549014] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:39.111 "tick_rate": 2700000000, 00:14:39.111 "poll_groups": [ 00:14:39.111 { 00:14:39.111 "name": "nvmf_tgt_poll_group_000", 00:14:39.111 "admin_qpairs": 0, 00:14:39.111 "io_qpairs": 0, 00:14:39.111 
"current_admin_qpairs": 0, 00:14:39.111 "current_io_qpairs": 0, 00:14:39.111 "pending_bdev_io": 0, 00:14:39.111 "completed_nvme_io": 0, 00:14:39.111 "transports": [] 00:14:39.111 }, 00:14:39.111 { 00:14:39.111 "name": "nvmf_tgt_poll_group_001", 00:14:39.111 "admin_qpairs": 0, 00:14:39.111 "io_qpairs": 0, 00:14:39.111 "current_admin_qpairs": 0, 00:14:39.111 "current_io_qpairs": 0, 00:14:39.111 "pending_bdev_io": 0, 00:14:39.111 "completed_nvme_io": 0, 00:14:39.111 "transports": [] 00:14:39.111 }, 00:14:39.111 { 00:14:39.111 "name": "nvmf_tgt_poll_group_002", 00:14:39.111 "admin_qpairs": 0, 00:14:39.111 "io_qpairs": 0, 00:14:39.111 "current_admin_qpairs": 0, 00:14:39.111 "current_io_qpairs": 0, 00:14:39.111 "pending_bdev_io": 0, 00:14:39.111 "completed_nvme_io": 0, 00:14:39.111 "transports": [] 00:14:39.111 }, 00:14:39.111 { 00:14:39.111 "name": "nvmf_tgt_poll_group_003", 00:14:39.111 "admin_qpairs": 0, 00:14:39.111 "io_qpairs": 0, 00:14:39.111 "current_admin_qpairs": 0, 00:14:39.111 "current_io_qpairs": 0, 00:14:39.111 "pending_bdev_io": 0, 00:14:39.111 "completed_nvme_io": 0, 00:14:39.111 "transports": [] 00:14:39.111 } 00:14:39.111 ] 00:14:39.111 }' 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.111 [2024-10-07 13:25:20.795767] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:39.111 "tick_rate": 2700000000, 00:14:39.111 "poll_groups": [ 00:14:39.111 { 00:14:39.111 "name": "nvmf_tgt_poll_group_000", 00:14:39.111 "admin_qpairs": 0, 00:14:39.111 "io_qpairs": 0, 00:14:39.111 "current_admin_qpairs": 0, 00:14:39.111 "current_io_qpairs": 0, 00:14:39.111 "pending_bdev_io": 0, 00:14:39.111 "completed_nvme_io": 0, 00:14:39.111 "transports": [ 00:14:39.111 { 00:14:39.111 "trtype": "TCP" 00:14:39.111 } 00:14:39.111 ] 00:14:39.111 }, 00:14:39.111 { 00:14:39.111 "name": "nvmf_tgt_poll_group_001", 00:14:39.111 "admin_qpairs": 0, 00:14:39.111 "io_qpairs": 0, 00:14:39.111 "current_admin_qpairs": 0, 00:14:39.111 "current_io_qpairs": 0, 00:14:39.111 "pending_bdev_io": 0, 00:14:39.111 "completed_nvme_io": 0, 00:14:39.111 "transports": [ 00:14:39.111 { 00:14:39.111 "trtype": "TCP" 00:14:39.111 } 00:14:39.111 ] 00:14:39.111 }, 00:14:39.111 { 00:14:39.111 "name": "nvmf_tgt_poll_group_002", 00:14:39.111 "admin_qpairs": 0, 00:14:39.111 "io_qpairs": 0, 00:14:39.111 
"current_admin_qpairs": 0, 00:14:39.111 "current_io_qpairs": 0, 00:14:39.111 "pending_bdev_io": 0, 00:14:39.111 "completed_nvme_io": 0, 00:14:39.111 "transports": [ 00:14:39.111 { 00:14:39.111 "trtype": "TCP" 00:14:39.111 } 00:14:39.111 ] 00:14:39.111 }, 00:14:39.111 { 00:14:39.111 "name": "nvmf_tgt_poll_group_003", 00:14:39.111 "admin_qpairs": 0, 00:14:39.111 "io_qpairs": 0, 00:14:39.111 "current_admin_qpairs": 0, 00:14:39.111 "current_io_qpairs": 0, 00:14:39.111 "pending_bdev_io": 0, 00:14:39.111 "completed_nvme_io": 0, 00:14:39.111 "transports": [ 00:14:39.111 { 00:14:39.111 "trtype": "TCP" 00:14:39.111 } 00:14:39.111 ] 00:14:39.111 } 00:14:39.111 ] 00:14:39.111 }' 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:39.111 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:39.112 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:39.112 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.372 Malloc1 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.372 [2024-10-07 13:25:20.945015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -a 10.0.0.2 -s 4420 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -a 10.0.0.2 -s 4420 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.372 
13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:39.372 13:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -a 10.0.0.2 -s 4420 00:14:39.372 [2024-10-07 13:25:20.967586] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4' 00:14:39.372 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:39.372 could not add new controller: failed to write to nvme-fabrics device 00:14:39.372 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:39.372 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:39.372 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:39.372 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:39.372 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:14:39.372 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.372 13:25:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.372 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.372 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:40.311 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:40.311 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:40.311 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.311 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:40.311 13:25:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:42.217 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.217 13:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:42.217 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:42.218 [2024-10-07 13:25:23.806622] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4' 00:14:42.218 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:42.218 could not add new controller: failed to write to nvme-fabrics device 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:42.218 13:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.218 13:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:42.784 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:42.784 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:42.784 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.784 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:42.784 13:25:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( 
nvme_devices == nvme_device_counter )) 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.323 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.323 [2024-10-07 13:25:26.592227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.323 13:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:45.583 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:45.583 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:45.583 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:45.583 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:45.583 13:25:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:48.111 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:48.111 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:48.111 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.111 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:48.111 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.111 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:48.111 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.111 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.111 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:48.111 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:48.111 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:48.111 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.111 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.112 13:25:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.112 [2024-10-07 13:25:29.381818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.112 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:48.371 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:48.371 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:48.371 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:48.371 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:48.371 13:25:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:50.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.902 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.903 [2024-10-07 13:25:32.155033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.903 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:51.162 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:51.162 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:51.162 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 
-- # local nvme_device_counter=1 nvme_devices=0 00:14:51.162 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:51.162 13:25:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # return 0 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.699 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.700 [2024-10-07 13:25:34.983904] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.700 13:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:53.700 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.700 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:53.958 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:53.958 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:53.958 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:53.958 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:53.958 13:25:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # 
sleep 2 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:56.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.492 [2024-10-07 13:25:37.795063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.492 13:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.492 13:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:56.750 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:56.751 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:14:56.751 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:56.751 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:56.751 13:25:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:59.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.293 [2024-10-07 13:25:40.582751] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.293 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 [2024-10-07 13:25:40.630759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.294 
13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 [2024-10-07 13:25:40.678927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.294 
13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 [2024-10-07 13:25:40.727113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 [2024-10-07 
13:25:40.775257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 
13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.294 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:14:59.294 "tick_rate": 2700000000, 00:14:59.294 "poll_groups": [ 00:14:59.294 { 00:14:59.294 "name": "nvmf_tgt_poll_group_000", 00:14:59.294 "admin_qpairs": 2, 00:14:59.294 "io_qpairs": 84, 00:14:59.294 "current_admin_qpairs": 0, 00:14:59.294 "current_io_qpairs": 0, 00:14:59.294 "pending_bdev_io": 0, 00:14:59.294 "completed_nvme_io": 86, 00:14:59.294 "transports": [ 00:14:59.294 { 00:14:59.294 "trtype": "TCP" 00:14:59.294 } 00:14:59.294 ] 00:14:59.294 }, 00:14:59.294 { 00:14:59.294 "name": "nvmf_tgt_poll_group_001", 00:14:59.294 "admin_qpairs": 2, 00:14:59.294 "io_qpairs": 84, 00:14:59.294 "current_admin_qpairs": 0, 00:14:59.294 "current_io_qpairs": 0, 00:14:59.294 "pending_bdev_io": 0, 00:14:59.294 "completed_nvme_io": 207, 00:14:59.294 "transports": [ 00:14:59.294 { 00:14:59.294 "trtype": "TCP" 00:14:59.294 } 00:14:59.294 ] 00:14:59.294 }, 00:14:59.294 { 00:14:59.294 "name": "nvmf_tgt_poll_group_002", 00:14:59.294 "admin_qpairs": 1, 00:14:59.295 "io_qpairs": 84, 00:14:59.295 "current_admin_qpairs": 0, 00:14:59.295 "current_io_qpairs": 0, 00:14:59.295 "pending_bdev_io": 0, 00:14:59.295 "completed_nvme_io": 209, 00:14:59.295 "transports": [ 00:14:59.295 { 00:14:59.295 "trtype": "TCP" 00:14:59.295 } 00:14:59.295 ] 00:14:59.295 }, 00:14:59.295 { 00:14:59.295 "name": "nvmf_tgt_poll_group_003", 00:14:59.295 "admin_qpairs": 2, 00:14:59.295 "io_qpairs": 84, 
00:14:59.295 "current_admin_qpairs": 0, 00:14:59.295 "current_io_qpairs": 0, 00:14:59.295 "pending_bdev_io": 0, 00:14:59.295 "completed_nvme_io": 184, 00:14:59.295 "transports": [ 00:14:59.295 { 00:14:59.295 "trtype": "TCP" 00:14:59.295 } 00:14:59.295 ] 00:14:59.295 } 00:14:59.295 ] 00:14:59.295 }' 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@514 -- # nvmfcleanup 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:59.295 rmmod nvme_tcp 00:14:59.295 rmmod nvme_fabrics 00:14:59.295 rmmod nvme_keyring 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@515 -- # '[' -n 1763495 ']' 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # killprocess 1763495 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 1763495 ']' 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 1763495 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:59.295 13:25:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1763495 00:14:59.555 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:59.556 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:59.556 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1763495' 00:14:59.556 killing process with pid 1763495 00:14:59.556 13:25:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 1763495 00:14:59.556 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 1763495 00:14:59.817 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:14:59.817 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:14:59.817 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:14:59.817 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:14:59.817 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-save 00:14:59.817 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:14:59.817 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@789 -- # iptables-restore 00:14:59.817 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:59.817 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:59.817 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.817 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:59.817 13:25:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.728 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:01.728 00:15:01.728 real 0m25.425s 00:15:01.728 user 1m22.228s 00:15:01.728 sys 0m4.256s 00:15:01.728 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.728 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.728 ************************************ 00:15:01.728 END TEST 
nvmf_rpc 00:15:01.728 ************************************ 00:15:01.728 13:25:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:01.728 13:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:01.728 13:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:01.728 13:25:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:01.728 ************************************ 00:15:01.728 START TEST nvmf_invalid 00:15:01.728 ************************************ 00:15:01.728 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:01.988 * Looking for test storage... 00:15:01.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lcov --version 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@336 -- # read -ra ver1 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:01.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.988 --rc genhtml_branch_coverage=1 00:15:01.988 --rc genhtml_function_coverage=1 00:15:01.988 --rc genhtml_legend=1 00:15:01.988 --rc geninfo_all_blocks=1 00:15:01.988 --rc geninfo_unexecuted_blocks=1 00:15:01.988 00:15:01.988 ' 
00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:01.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.988 --rc genhtml_branch_coverage=1 00:15:01.988 --rc genhtml_function_coverage=1 00:15:01.988 --rc genhtml_legend=1 00:15:01.988 --rc geninfo_all_blocks=1 00:15:01.988 --rc geninfo_unexecuted_blocks=1 00:15:01.988 00:15:01.988 ' 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:01.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.988 --rc genhtml_branch_coverage=1 00:15:01.988 --rc genhtml_function_coverage=1 00:15:01.988 --rc genhtml_legend=1 00:15:01.988 --rc geninfo_all_blocks=1 00:15:01.988 --rc geninfo_unexecuted_blocks=1 00:15:01.988 00:15:01.988 ' 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:01.988 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:01.988 --rc genhtml_branch_coverage=1 00:15:01.988 --rc genhtml_function_coverage=1 00:15:01.988 --rc genhtml_legend=1 00:15:01.988 --rc geninfo_all_blocks=1 00:15:01.988 --rc geninfo_unexecuted_blocks=1 00:15:01.988 00:15:01.988 ' 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:01.988 13:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:01.988 
13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:01.988 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:01.988 13:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:01.989 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:01.989 13:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:01.989 13:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:15:03.893 13:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:03.893 13:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:15:03.893 Found 0000:09:00.0 (0x8086 - 0x1592) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:15:03.893 Found 0000:09:00.1 (0x8086 - 0x1592) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:03.893 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:03.894 Found net devices under 0000:09:00.0: cvl_0_0 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:03.894 Found net devices under 0000:09:00.1: cvl_0_1 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # is_hw=yes 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:03.894 13:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:03.894 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:04.152 13:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:04.152 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.152 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:15:04.152 00:15:04.152 --- 10.0.0.2 ping statistics --- 00:15:04.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.152 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:04.152 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:04.152 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:15:04.152 00:15:04.152 --- 10.0.0.1 ping statistics --- 00:15:04.152 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.152 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # return 0 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:04.152 13:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # nvmfpid=1767874 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # waitforlisten 1767874 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 1767874 ']' 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:04.152 13:25:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:04.152 [2024-10-07 13:25:45.785853] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:15:04.152 [2024-10-07 13:25:45.785931] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.152 [2024-10-07 13:25:45.845488] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:04.411 [2024-10-07 13:25:45.953646] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.411 [2024-10-07 13:25:45.953734] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.411 [2024-10-07 13:25:45.953770] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:04.411 [2024-10-07 13:25:45.953782] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:04.411 [2024-10-07 13:25:45.953791] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:04.411 [2024-10-07 13:25:45.955353] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:04.411 [2024-10-07 13:25:45.955418] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:04.411 [2024-10-07 13:25:45.955526] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:04.411 [2024-10-07 13:25:45.955530] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:04.411 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:04.411 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:15:04.411 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:04.411 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:04.411 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:04.411 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:04.411 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:04.411 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode926 00:15:04.976 [2024-10-07 13:25:46.413204] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:04.976 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:04.976 { 00:15:04.976 "nqn": "nqn.2016-06.io.spdk:cnode926", 00:15:04.976 "tgt_name": "foobar", 00:15:04.976 "method": "nvmf_create_subsystem", 00:15:04.976 "req_id": 1 00:15:04.976 } 00:15:04.976 Got JSON-RPC error 
response 00:15:04.976 response: 00:15:04.976 { 00:15:04.976 "code": -32603, 00:15:04.976 "message": "Unable to find target foobar" 00:15:04.976 }' 00:15:04.976 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:04.976 { 00:15:04.976 "nqn": "nqn.2016-06.io.spdk:cnode926", 00:15:04.976 "tgt_name": "foobar", 00:15:04.976 "method": "nvmf_create_subsystem", 00:15:04.976 "req_id": 1 00:15:04.976 } 00:15:04.976 Got JSON-RPC error response 00:15:04.976 response: 00:15:04.976 { 00:15:04.976 "code": -32603, 00:15:04.976 "message": "Unable to find target foobar" 00:15:04.976 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:04.976 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:04.976 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13525 00:15:05.234 [2024-10-07 13:25:46.690162] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13525: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:05.234 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:05.234 { 00:15:05.235 "nqn": "nqn.2016-06.io.spdk:cnode13525", 00:15:05.235 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:05.235 "method": "nvmf_create_subsystem", 00:15:05.235 "req_id": 1 00:15:05.235 } 00:15:05.235 Got JSON-RPC error response 00:15:05.235 response: 00:15:05.235 { 00:15:05.235 "code": -32602, 00:15:05.235 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:05.235 }' 00:15:05.235 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:05.235 { 00:15:05.235 "nqn": "nqn.2016-06.io.spdk:cnode13525", 00:15:05.235 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:05.235 "method": "nvmf_create_subsystem", 00:15:05.235 
"req_id": 1 00:15:05.235 } 00:15:05.235 Got JSON-RPC error response 00:15:05.235 response: 00:15:05.235 { 00:15:05.235 "code": -32602, 00:15:05.235 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:05.235 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:05.235 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:05.235 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20408 00:15:05.493 [2024-10-07 13:25:46.975124] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20408: invalid model number 'SPDK_Controller' 00:15:05.493 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:05.493 { 00:15:05.493 "nqn": "nqn.2016-06.io.spdk:cnode20408", 00:15:05.493 "model_number": "SPDK_Controller\u001f", 00:15:05.493 "method": "nvmf_create_subsystem", 00:15:05.493 "req_id": 1 00:15:05.493 } 00:15:05.493 Got JSON-RPC error response 00:15:05.493 response: 00:15:05.493 { 00:15:05.493 "code": -32602, 00:15:05.493 "message": "Invalid MN SPDK_Controller\u001f" 00:15:05.493 }' 00:15:05.493 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:05.493 { 00:15:05.493 "nqn": "nqn.2016-06.io.spdk:cnode20408", 00:15:05.493 "model_number": "SPDK_Controller\u001f", 00:15:05.493 "method": "nvmf_create_subsystem", 00:15:05.493 "req_id": 1 00:15:05.493 } 00:15:05.493 Got JSON-RPC error response 00:15:05.493 response: 00:15:05.493 { 00:15:05.493 "code": -32602, 00:15:05.493 "message": "Invalid MN SPDK_Controller\u001f" 00:15:05.493 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:05.493 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:05.493 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:15:05.493 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:05.493 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:05.493 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:05.493 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:05.493 13:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.493 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:05.494 13:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 
00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:05.494 
13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ W == \- ]] 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'W|B\pzM.d9Z#tCL2inU9' 00:15:05.494 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'W|B\pzM.d9Z#tCL2inU9' nqn.2016-06.io.spdk:cnode31967 00:15:05.754 [2024-10-07 13:25:47.312197] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31967: invalid serial number 'W|B\pzM.d9Z#tCL2inU9' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:05.754 { 00:15:05.754 "nqn": "nqn.2016-06.io.spdk:cnode31967", 00:15:05.754 "serial_number": "W|B\\\u007fpzM.d9Z#tCL2inU9", 00:15:05.754 "method": "nvmf_create_subsystem", 00:15:05.754 "req_id": 1 00:15:05.754 } 00:15:05.754 Got JSON-RPC error response 00:15:05.754 response: 00:15:05.754 { 00:15:05.754 "code": -32602, 00:15:05.754 "message": "Invalid SN W|B\\\u007fpzM.d9Z#tCL2inU9" 00:15:05.754 }' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:05.754 { 00:15:05.754 "nqn": "nqn.2016-06.io.spdk:cnode31967", 00:15:05.754 "serial_number": "W|B\\\u007fpzM.d9Z#tCL2inU9", 00:15:05.754 "method": "nvmf_create_subsystem", 00:15:05.754 "req_id": 1 00:15:05.754 } 00:15:05.754 Got JSON-RPC 
error response 00:15:05.754 response: 00:15:05.754 { 00:15:05.754 "code": -32602, 00:15:05.754 "message": "Invalid SN W|B\\\u007fpzM.d9Z#tCL2inU9" 00:15:05.754 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.754 13:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.754 13:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:05.754 13:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:05.754 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:05.755 13:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:05.755 13:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 
00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:05.755 
13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:05.755 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:05.755 13:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:15:06.014 13:25:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ g == \- ]] 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 
'g}Y/tn7sG?zO)Ir{\0!VO~PJ-Y?uwk@c:RV,Rn"CT' 00:15:06.014 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'g}Y/tn7sG?zO)Ir{\0!VO~PJ-Y?uwk@c:RV,Rn"CT' nqn.2016-06.io.spdk:cnode11760 00:15:06.272 [2024-10-07 13:25:47.737591] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11760: invalid model number 'g}Y/tn7sG?zO)Ir{\0!VO~PJ-Y?uwk@c:RV,Rn"CT' 00:15:06.272 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:15:06.272 { 00:15:06.272 "nqn": "nqn.2016-06.io.spdk:cnode11760", 00:15:06.272 "model_number": "g}Y/tn7sG?zO)Ir{\\0!VO~PJ-Y?uwk@c:RV,Rn\"CT", 00:15:06.272 "method": "nvmf_create_subsystem", 00:15:06.272 "req_id": 1 00:15:06.272 } 00:15:06.272 Got JSON-RPC error response 00:15:06.272 response: 00:15:06.272 { 00:15:06.272 "code": -32602, 00:15:06.272 "message": "Invalid MN g}Y/tn7sG?zO)Ir{\\0!VO~PJ-Y?uwk@c:RV,Rn\"CT" 00:15:06.272 }' 00:15:06.272 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:15:06.272 { 00:15:06.272 "nqn": "nqn.2016-06.io.spdk:cnode11760", 00:15:06.272 "model_number": "g}Y/tn7sG?zO)Ir{\\0!VO~PJ-Y?uwk@c:RV,Rn\"CT", 00:15:06.272 "method": "nvmf_create_subsystem", 00:15:06.272 "req_id": 1 00:15:06.272 } 00:15:06.272 Got JSON-RPC error response 00:15:06.272 response: 00:15:06.272 { 00:15:06.272 "code": -32602, 00:15:06.273 "message": "Invalid MN g}Y/tn7sG?zO)Ir{\\0!VO~PJ-Y?uwk@c:RV,Rn\"CT" 00:15:06.273 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:06.273 13:25:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:15:06.530 [2024-10-07 13:25:48.030670] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:06.530 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:06.786 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:15:06.786 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:15:06.786 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:15:06.786 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:15:06.786 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:15:07.043 [2024-10-07 13:25:48.576434] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:07.043 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:15:07.043 { 00:15:07.043 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:07.043 "listen_address": { 00:15:07.043 "trtype": "tcp", 00:15:07.043 "traddr": "", 00:15:07.043 "trsvcid": "4421" 00:15:07.043 }, 00:15:07.043 "method": "nvmf_subsystem_remove_listener", 00:15:07.043 "req_id": 1 00:15:07.043 } 00:15:07.043 Got JSON-RPC error response 00:15:07.043 response: 00:15:07.043 { 00:15:07.043 "code": -32602, 00:15:07.043 "message": "Invalid parameters" 00:15:07.043 }' 00:15:07.043 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:15:07.043 { 00:15:07.043 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:07.043 "listen_address": { 00:15:07.043 "trtype": "tcp", 00:15:07.043 "traddr": "", 00:15:07.043 "trsvcid": "4421" 00:15:07.043 }, 00:15:07.043 "method": "nvmf_subsystem_remove_listener", 00:15:07.043 "req_id": 1 00:15:07.043 } 00:15:07.043 Got JSON-RPC error response 00:15:07.043 response: 00:15:07.043 { 00:15:07.043 "code": -32602, 00:15:07.043 "message": 
"Invalid parameters" 00:15:07.043 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:07.043 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17655 -i 0 00:15:07.300 [2024-10-07 13:25:48.837254] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17655: invalid cntlid range [0-65519] 00:15:07.300 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:15:07.300 { 00:15:07.300 "nqn": "nqn.2016-06.io.spdk:cnode17655", 00:15:07.300 "min_cntlid": 0, 00:15:07.300 "method": "nvmf_create_subsystem", 00:15:07.300 "req_id": 1 00:15:07.300 } 00:15:07.300 Got JSON-RPC error response 00:15:07.300 response: 00:15:07.300 { 00:15:07.300 "code": -32602, 00:15:07.300 "message": "Invalid cntlid range [0-65519]" 00:15:07.300 }' 00:15:07.300 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:15:07.300 { 00:15:07.300 "nqn": "nqn.2016-06.io.spdk:cnode17655", 00:15:07.300 "min_cntlid": 0, 00:15:07.300 "method": "nvmf_create_subsystem", 00:15:07.300 "req_id": 1 00:15:07.300 } 00:15:07.300 Got JSON-RPC error response 00:15:07.300 response: 00:15:07.300 { 00:15:07.300 "code": -32602, 00:15:07.300 "message": "Invalid cntlid range [0-65519]" 00:15:07.300 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:07.300 13:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13298 -i 65520 00:15:07.559 [2024-10-07 13:25:49.118178] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13298: invalid cntlid range [65520-65519] 00:15:07.559 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:15:07.559 { 00:15:07.559 "nqn": 
"nqn.2016-06.io.spdk:cnode13298", 00:15:07.559 "min_cntlid": 65520, 00:15:07.559 "method": "nvmf_create_subsystem", 00:15:07.559 "req_id": 1 00:15:07.559 } 00:15:07.559 Got JSON-RPC error response 00:15:07.559 response: 00:15:07.559 { 00:15:07.559 "code": -32602, 00:15:07.559 "message": "Invalid cntlid range [65520-65519]" 00:15:07.559 }' 00:15:07.559 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:15:07.559 { 00:15:07.559 "nqn": "nqn.2016-06.io.spdk:cnode13298", 00:15:07.559 "min_cntlid": 65520, 00:15:07.559 "method": "nvmf_create_subsystem", 00:15:07.559 "req_id": 1 00:15:07.559 } 00:15:07.559 Got JSON-RPC error response 00:15:07.559 response: 00:15:07.559 { 00:15:07.559 "code": -32602, 00:15:07.559 "message": "Invalid cntlid range [65520-65519]" 00:15:07.559 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:07.559 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14741 -I 0 00:15:07.817 [2024-10-07 13:25:49.399111] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14741: invalid cntlid range [1-0] 00:15:07.817 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:15:07.817 { 00:15:07.817 "nqn": "nqn.2016-06.io.spdk:cnode14741", 00:15:07.817 "max_cntlid": 0, 00:15:07.817 "method": "nvmf_create_subsystem", 00:15:07.817 "req_id": 1 00:15:07.817 } 00:15:07.817 Got JSON-RPC error response 00:15:07.817 response: 00:15:07.817 { 00:15:07.817 "code": -32602, 00:15:07.817 "message": "Invalid cntlid range [1-0]" 00:15:07.817 }' 00:15:07.817 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:15:07.817 { 00:15:07.817 "nqn": "nqn.2016-06.io.spdk:cnode14741", 00:15:07.817 "max_cntlid": 0, 00:15:07.817 "method": "nvmf_create_subsystem", 00:15:07.817 "req_id": 1 
00:15:07.817 } 00:15:07.817 Got JSON-RPC error response 00:15:07.817 response: 00:15:07.817 { 00:15:07.817 "code": -32602, 00:15:07.817 "message": "Invalid cntlid range [1-0]" 00:15:07.817 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:07.817 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21802 -I 65520 00:15:08.074 [2024-10-07 13:25:49.664009] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21802: invalid cntlid range [1-65520] 00:15:08.074 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:15:08.074 { 00:15:08.074 "nqn": "nqn.2016-06.io.spdk:cnode21802", 00:15:08.074 "max_cntlid": 65520, 00:15:08.074 "method": "nvmf_create_subsystem", 00:15:08.074 "req_id": 1 00:15:08.074 } 00:15:08.074 Got JSON-RPC error response 00:15:08.074 response: 00:15:08.074 { 00:15:08.074 "code": -32602, 00:15:08.074 "message": "Invalid cntlid range [1-65520]" 00:15:08.074 }' 00:15:08.074 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:15:08.074 { 00:15:08.074 "nqn": "nqn.2016-06.io.spdk:cnode21802", 00:15:08.074 "max_cntlid": 65520, 00:15:08.074 "method": "nvmf_create_subsystem", 00:15:08.074 "req_id": 1 00:15:08.074 } 00:15:08.074 Got JSON-RPC error response 00:15:08.074 response: 00:15:08.074 { 00:15:08.074 "code": -32602, 00:15:08.074 "message": "Invalid cntlid range [1-65520]" 00:15:08.074 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:08.074 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7579 -i 6 -I 5 00:15:08.332 [2024-10-07 13:25:49.928912] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7579: invalid cntlid range 
[6-5] 00:15:08.332 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:15:08.332 { 00:15:08.332 "nqn": "nqn.2016-06.io.spdk:cnode7579", 00:15:08.332 "min_cntlid": 6, 00:15:08.332 "max_cntlid": 5, 00:15:08.332 "method": "nvmf_create_subsystem", 00:15:08.332 "req_id": 1 00:15:08.332 } 00:15:08.332 Got JSON-RPC error response 00:15:08.332 response: 00:15:08.332 { 00:15:08.332 "code": -32602, 00:15:08.332 "message": "Invalid cntlid range [6-5]" 00:15:08.332 }' 00:15:08.332 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:15:08.332 { 00:15:08.332 "nqn": "nqn.2016-06.io.spdk:cnode7579", 00:15:08.332 "min_cntlid": 6, 00:15:08.332 "max_cntlid": 5, 00:15:08.332 "method": "nvmf_create_subsystem", 00:15:08.332 "req_id": 1 00:15:08.332 } 00:15:08.332 Got JSON-RPC error response 00:15:08.332 response: 00:15:08.332 { 00:15:08.332 "code": -32602, 00:15:08.332 "message": "Invalid cntlid range [6-5]" 00:15:08.333 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:08.333 13:25:49 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:15:08.593 { 00:15:08.593 "name": "foobar", 00:15:08.593 "method": "nvmf_delete_target", 00:15:08.593 "req_id": 1 00:15:08.593 } 00:15:08.593 Got JSON-RPC error response 00:15:08.593 response: 00:15:08.593 { 00:15:08.593 "code": -32602, 00:15:08.593 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:15:08.593 }' 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:15:08.593 { 00:15:08.593 "name": "foobar", 00:15:08.593 "method": "nvmf_delete_target", 00:15:08.593 "req_id": 1 00:15:08.593 } 00:15:08.593 Got JSON-RPC error response 00:15:08.593 response: 00:15:08.593 { 00:15:08.593 "code": -32602, 00:15:08.593 "message": "The specified target doesn't exist, cannot delete it." 00:15:08.593 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:08.593 rmmod nvme_tcp 00:15:08.593 rmmod nvme_fabrics 00:15:08.593 rmmod nvme_keyring 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@515 -- # '[' -n 1767874 ']' 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@516 -- # killprocess 1767874 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 1767874 ']' 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 1767874 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1767874 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1767874' 00:15:08.593 killing process with pid 1767874 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 1767874 00:15:08.593 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 1767874 00:15:08.852 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:08.852 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:08.852 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:08.852 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:15:08.852 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # iptables-save 00:15:08.852 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:08.852 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@789 -- # iptables-restore 00:15:08.852 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:08.852 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:08.852 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.852 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.852 13:25:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.791 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:11.050 00:15:11.050 real 0m9.108s 00:15:11.050 user 0m21.939s 00:15:11.050 sys 0m2.436s 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:11.050 ************************************ 00:15:11.050 END TEST nvmf_invalid 00:15:11.050 ************************************ 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:11.050 ************************************ 00:15:11.050 START TEST nvmf_connect_stress 00:15:11.050 ************************************ 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:11.050 * Looking for test storage... 00:15:11.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lcov --version 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:11.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.050 --rc genhtml_branch_coverage=1 00:15:11.050 --rc genhtml_function_coverage=1 00:15:11.050 --rc genhtml_legend=1 00:15:11.050 --rc geninfo_all_blocks=1 00:15:11.050 --rc geninfo_unexecuted_blocks=1 00:15:11.050 00:15:11.050 ' 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:11.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.050 --rc genhtml_branch_coverage=1 00:15:11.050 --rc genhtml_function_coverage=1 00:15:11.050 --rc genhtml_legend=1 00:15:11.050 --rc geninfo_all_blocks=1 00:15:11.050 --rc geninfo_unexecuted_blocks=1 00:15:11.050 00:15:11.050 ' 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:11.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.050 --rc genhtml_branch_coverage=1 00:15:11.050 --rc genhtml_function_coverage=1 00:15:11.050 --rc genhtml_legend=1 00:15:11.050 --rc geninfo_all_blocks=1 00:15:11.050 --rc geninfo_unexecuted_blocks=1 00:15:11.050 00:15:11.050 ' 00:15:11.050 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:11.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.050 --rc genhtml_branch_coverage=1 00:15:11.050 --rc genhtml_function_coverage=1 00:15:11.050 --rc genhtml_legend=1 00:15:11.069 --rc geninfo_all_blocks=1 00:15:11.069 --rc geninfo_unexecuted_blocks=1 00:15:11.069 00:15:11.069 ' 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:11.069 13:25:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:15:11.069 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:11.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:15:11.070 13:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:13.603 13:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:15:13.603 Found 0000:09:00.0 (0x8086 - 0x1592) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:13.603 13:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:15:13.603 Found 0000:09:00.1 (0x8086 - 0x1592) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.603 13:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:13.603 Found net devices under 0000:09:00.0: cvl_0_0 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:13.603 Found net devices under 0000:09:00.1: cvl_0_1 
00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # is_hw=yes 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:13.603 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:13.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:13.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:15:13.604 00:15:13.604 --- 10.0.0.2 ping statistics --- 00:15:13.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.604 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:13.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:13.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:15:13.604 00:15:13.604 --- 10.0.0.1 ping statistics --- 00:15:13.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:13.604 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # return 0 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:13.604 13:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # nvmfpid=1770430 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # waitforlisten 1770430 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 1770430 ']' 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:13.604 13:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.604 [2024-10-07 13:25:54.942317] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:15:13.604 [2024-10-07 13:25:54.942400] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.604 [2024-10-07 13:25:55.003989] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:13.604 [2024-10-07 13:25:55.107793] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.604 [2024-10-07 13:25:55.107847] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.604 [2024-10-07 13:25:55.107877] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:13.604 [2024-10-07 13:25:55.107889] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:13.604 [2024-10-07 13:25:55.107899] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:13.604 [2024-10-07 13:25:55.108715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:13.604 [2024-10-07 13:25:55.108791] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:13.604 [2024-10-07 13:25:55.108795] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.604 [2024-10-07 13:25:55.254857] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 
-- # xtrace_disable 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.604 [2024-10-07 13:25:55.287860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:13.604 NULL1 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1770456 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:13.604 13:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.604 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.863 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.864 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.864 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.864 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.864 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.864 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.864 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:13.864 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:13.864 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:13.864 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:13.864 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.864 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.122 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.122 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:14.122 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.122 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.122 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.380 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.380 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:14.380 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.380 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.380 13:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.640 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.640 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:14.640 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:14.640 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.640 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.207 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.207 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:15.207 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.207 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.207 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.464 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.464 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:15.464 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.464 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.464 13:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.722 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.722 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:15.722 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.722 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.722 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.980 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.980 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:15.980 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.980 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.980 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:16.262 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.262 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:16.262 13:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.262 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.262 13:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:16.837 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.837 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:16.837 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.837 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.837 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.096 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.096 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:17.096 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.096 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.096 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.354 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.354 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:17.354 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.354 13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.354 
13:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.615 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.615 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:17.615 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.615 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.615 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.874 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.874 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:17.874 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.874 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.874 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:18.442 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.442 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:18.442 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.442 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.442 13:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:18.700 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.700 
13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:18.700 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.700 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.700 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:18.959 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.959 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:18.960 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.960 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.960 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.219 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.219 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:19.219 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.219 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.219 13:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.480 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.480 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:19.480 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:15:19.480 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.480 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.048 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.048 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:20.048 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.048 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.048 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.308 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.308 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:20.308 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.308 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.308 13:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.568 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.568 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:20.568 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.568 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.568 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:15:20.826 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.826 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:20.826 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.826 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.826 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.105 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.105 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:21.105 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.105 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.105 13:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.364 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.364 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:21.364 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.364 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.364 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.932 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.932 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1770456 00:15:21.932 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.932 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.932 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.192 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.192 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:22.192 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.192 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.192 13:26:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.450 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.450 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:22.450 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.450 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.450 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.708 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.708 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:22.708 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.708 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:22.708 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.968 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.968 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:22.968 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.968 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.968 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.536 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.536 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:23.536 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.536 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.536 13:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.796 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.796 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:23.796 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.796 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.796 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.796 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1770456 00:15:24.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1770456) - No such process 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1770456 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:24.055 rmmod nvme_tcp 00:15:24.055 rmmod nvme_fabrics 00:15:24.055 rmmod nvme_keyring 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@129 -- # return 0 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@515 -- # '[' -n 1770430 ']' 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # killprocess 1770430 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 1770430 ']' 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 1770430 00:15:24.055 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:15:24.056 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.056 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1770430 00:15:24.056 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:24.056 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:24.056 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1770430' 00:15:24.056 killing process with pid 1770430 00:15:24.056 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 1770430 00:15:24.056 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 1770430 00:15:24.314 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:24.314 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:24.314 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:24.314 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@297 -- # iptr 00:15:24.314 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-save 00:15:24.314 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:24.314 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@789 -- # iptables-restore 00:15:24.314 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:24.314 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:24.314 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.314 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:24.314 13:26:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:26.852 00:15:26.852 real 0m15.471s 00:15:26.852 user 0m38.724s 00:15:26.852 sys 0m5.795s 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.852 ************************************ 00:15:26.852 END TEST nvmf_connect_stress 00:15:26.852 ************************************ 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:26.852 ************************************ 00:15:26.852 START TEST nvmf_fused_ordering 00:15:26.852 ************************************ 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:26.852 * Looking for test storage... 00:15:26.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lcov --version 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- scripts/common.sh@338 -- # local 'op=<' 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:26.852 13:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:26.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.852 --rc genhtml_branch_coverage=1 00:15:26.852 --rc genhtml_function_coverage=1 00:15:26.852 --rc genhtml_legend=1 00:15:26.852 --rc geninfo_all_blocks=1 00:15:26.852 --rc geninfo_unexecuted_blocks=1 00:15:26.852 00:15:26.852 ' 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:26.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.852 --rc genhtml_branch_coverage=1 00:15:26.852 --rc genhtml_function_coverage=1 00:15:26.852 --rc genhtml_legend=1 00:15:26.852 --rc geninfo_all_blocks=1 00:15:26.852 --rc geninfo_unexecuted_blocks=1 00:15:26.852 00:15:26.852 ' 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:26.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:26.852 --rc genhtml_branch_coverage=1 00:15:26.852 --rc genhtml_function_coverage=1 00:15:26.852 --rc genhtml_legend=1 00:15:26.852 --rc geninfo_all_blocks=1 00:15:26.852 --rc geninfo_unexecuted_blocks=1 00:15:26.852 00:15:26.852 ' 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:26.852 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:26.852 --rc genhtml_branch_coverage=1 00:15:26.852 --rc genhtml_function_coverage=1 00:15:26.852 --rc genhtml_legend=1 00:15:26.852 --rc geninfo_all_blocks=1 00:15:26.852 --rc geninfo_unexecuted_blocks=1 00:15:26.852 00:15:26.852 ' 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # 
NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:26.852 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.853 13:26:08 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:26.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:26.853 13:26:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:28.759 13:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:28.759 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:15:28.760 Found 0000:09:00.0 (0x8086 - 0x1592) 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:28.760 13:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:15:28.760 Found 0000:09:00.1 (0x8086 - 0x1592) 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.760 13:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:28.760 Found net devices under 0000:09:00.0: cvl_0_0 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:28.760 Found net devices under 0000:09:00.1: cvl_0_1 
00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # is_hw=yes 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:28.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:28.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:15:28.760 00:15:28.760 --- 10.0.0.2 ping statistics --- 00:15:28.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.760 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:28.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:28.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:15:28.760 00:15:28.760 --- 10.0.0.1 ping statistics --- 00:15:28.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:28.760 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # return 0 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:28.760 13:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:28.760 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:28.761 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:28.761 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # nvmfpid=1773527 00:15:28.761 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:28.761 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # waitforlisten 1773527 00:15:28.761 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 1773527 ']' 00:15:28.761 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.761 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:28.761 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.761 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:28.761 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:28.761 [2024-10-07 13:26:10.456233] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:15:28.761 [2024-10-07 13:26:10.456341] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:29.020 [2024-10-07 13:26:10.518573] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.020 [2024-10-07 13:26:10.621754] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:29.020 [2024-10-07 13:26:10.621804] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:29.020 [2024-10-07 13:26:10.621817] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:29.020 [2024-10-07 13:26:10.621828] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:29.020 [2024-10-07 13:26:10.621837] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
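The nvmftestinit trace above moves one physical port (cvl_0_0, given 10.0.0.2) into the cvl_0_0_ns_spdk namespace for the target, leaves its peer (cvl_0_1, given 10.0.0.1) in the root namespace for the initiator, and opens port 4420. As a sketch, the same commands can be replayed as a dry run: `IP`/`IPT` are set to `echo` here so the sketch runs unprivileged and without the cvl_0_* interfaces (drop the `echo` stand-ins to execute for real, which requires root).

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology built by nvmf/common.sh above:
# target side (cvl_0_0, 10.0.0.2) inside cvl_0_0_ns_spdk, initiator side
# (cvl_0_1, 10.0.0.1) in the root namespace. "echo" stands in for the
# real ip/iptables binaries so the commands are printed, not executed.
IP="echo ip"
IPT="echo iptables"

$IP netns add cvl_0_0_ns_spdk
$IP link set cvl_0_0 netns cvl_0_0_ns_spdk
$IP addr add 10.0.0.1/24 dev cvl_0_1
$IP netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
$IP link set cvl_0_1 up
$IP netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
$IP netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow the NVMe/TCP listener port through the host firewall
$IPT -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

With the topology in place, the harness verifies reachability in both directions (`ping -c 1 10.0.0.2` from the root namespace, `ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1` from the target side), exactly as the ping statistics above show.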
00:15:29.020 [2024-10-07 13:26:10.622381] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.020 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:29.020 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:29.279 [2024-10-07 13:26:10.762542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:29.279 [2024-10-07 13:26:10.778813] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:29.279 NULL1 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.279 13:26:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:29.279 [2024-10-07 13:26:10.822358] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:15:29.279 [2024-10-07 13:26:10.822393] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1773597 ] 00:15:29.540 Attached to nqn.2016-06.io.spdk:cnode1 00:15:29.540 Namespace ID: 1 size: 1GB 00:15:29.540 fused_ordering(0) 00:15:29.540 fused_ordering(1) 00:15:29.540 fused_ordering(2) 00:15:29.540 fused_ordering(3) 00:15:29.540 fused_ordering(4) 00:15:29.540 fused_ordering(5) 00:15:29.540 fused_ordering(6) 00:15:29.540 fused_ordering(7) 00:15:29.540 fused_ordering(8) 00:15:29.540 fused_ordering(9) 00:15:29.540 fused_ordering(10) 00:15:29.540 fused_ordering(11) 00:15:29.540 fused_ordering(12) 00:15:29.540 fused_ordering(13) 00:15:29.540 fused_ordering(14) 00:15:29.540 fused_ordering(15) 00:15:29.540 fused_ordering(16) 00:15:29.540 fused_ordering(17) 00:15:29.540 fused_ordering(18) 00:15:29.540 fused_ordering(19) 00:15:29.540 fused_ordering(20) 00:15:29.540 fused_ordering(21) 00:15:29.540 fused_ordering(22) 00:15:29.540 fused_ordering(23) 00:15:29.540 fused_ordering(24) 00:15:29.540 fused_ordering(25) 00:15:29.540 fused_ordering(26) 00:15:29.540 fused_ordering(27) 00:15:29.540 
fused_ordering(28) 00:15:29.540 fused_ordering(29) 00:15:29.540 fused_ordering(30) 00:15:29.540 fused_ordering(31) 00:15:29.540 fused_ordering(32) 00:15:29.540 fused_ordering(33) 00:15:29.540 fused_ordering(34) 00:15:29.540 fused_ordering(35) 00:15:29.540 fused_ordering(36) 00:15:29.540 fused_ordering(37) 00:15:29.540 fused_ordering(38) 00:15:29.540 fused_ordering(39) 00:15:29.540 fused_ordering(40) 00:15:29.540 fused_ordering(41) 00:15:29.540 fused_ordering(42) 00:15:29.540 fused_ordering(43) 00:15:29.540 fused_ordering(44) 00:15:29.540 fused_ordering(45) 00:15:29.540 fused_ordering(46) 00:15:29.540 fused_ordering(47) 00:15:29.540 fused_ordering(48) 00:15:29.540 fused_ordering(49) 00:15:29.540 fused_ordering(50) 00:15:29.540 fused_ordering(51) 00:15:29.540 fused_ordering(52) 00:15:29.540 fused_ordering(53) 00:15:29.540 fused_ordering(54) 00:15:29.540 fused_ordering(55) 00:15:29.540 fused_ordering(56) 00:15:29.540 fused_ordering(57) 00:15:29.540 fused_ordering(58) 00:15:29.540 fused_ordering(59) 00:15:29.540 fused_ordering(60) 00:15:29.540 fused_ordering(61) 00:15:29.540 fused_ordering(62) 00:15:29.540 fused_ordering(63) 00:15:29.540 fused_ordering(64) 00:15:29.540 fused_ordering(65) 00:15:29.540 fused_ordering(66) 00:15:29.540 fused_ordering(67) 00:15:29.540 fused_ordering(68) 00:15:29.540 fused_ordering(69) 00:15:29.540 fused_ordering(70) 00:15:29.540 fused_ordering(71) 00:15:29.540 fused_ordering(72) 00:15:29.540 fused_ordering(73) 00:15:29.540 fused_ordering(74) 00:15:29.540 fused_ordering(75) 00:15:29.540 fused_ordering(76) 00:15:29.540 fused_ordering(77) 00:15:29.540 fused_ordering(78) 00:15:29.540 fused_ordering(79) 00:15:29.540 fused_ordering(80) 00:15:29.540 fused_ordering(81) 00:15:29.540 fused_ordering(82) 00:15:29.540 fused_ordering(83) 00:15:29.540 fused_ordering(84) 00:15:29.540 fused_ordering(85) 00:15:29.540 fused_ordering(86) 00:15:29.540 fused_ordering(87) 00:15:29.540 fused_ordering(88) 00:15:29.540 fused_ordering(89) 00:15:29.540 
00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:31.531 rmmod nvme_tcp 00:15:31.531 rmmod nvme_fabrics 00:15:31.531 rmmod nvme_keyring 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r
nvme-fabrics 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@515 -- # '[' -n 1773527 ']' 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # killprocess 1773527 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 1773527 ']' 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 1773527 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1773527 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1773527' 00:15:31.531 killing process with pid 1773527 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 1773527 00:15:31.531 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 1773527 00:15:31.791 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:31.791 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@521 -- # [[ tcp == 
\t\c\p ]] 00:15:31.791 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:31.791 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:31.791 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-save 00:15:31.791 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:31.791 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@789 -- # iptables-restore 00:15:31.791 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:31.791 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:31.791 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.791 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.791 13:26:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.321 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:34.321 00:15:34.321 real 0m7.363s 00:15:34.321 user 0m4.940s 00:15:34.321 sys 0m3.030s 00:15:34.321 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:34.321 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:34.321 ************************************ 00:15:34.321 END TEST nvmf_fused_ordering 00:15:34.321 ************************************ 00:15:34.321 13:26:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:34.321 13:26:15 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:34.321 13:26:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:34.321 13:26:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:34.321 ************************************ 00:15:34.321 START TEST nvmf_ns_masking 00:15:34.321 ************************************ 00:15:34.321 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:34.321 * Looking for test storage... 00:15:34.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.321 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:34.321 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lcov --version 00:15:34.321 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:34.321 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:34.321 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:34.322 13:26:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:34.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.322 --rc genhtml_branch_coverage=1 00:15:34.322 --rc genhtml_function_coverage=1 00:15:34.322 --rc genhtml_legend=1 00:15:34.322 --rc geninfo_all_blocks=1 00:15:34.322 --rc geninfo_unexecuted_blocks=1 00:15:34.322 00:15:34.322 ' 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:34.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.322 --rc genhtml_branch_coverage=1 00:15:34.322 --rc genhtml_function_coverage=1 00:15:34.322 --rc genhtml_legend=1 00:15:34.322 --rc geninfo_all_blocks=1 00:15:34.322 --rc geninfo_unexecuted_blocks=1 00:15:34.322 00:15:34.322 ' 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:34.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.322 --rc genhtml_branch_coverage=1 00:15:34.322 --rc genhtml_function_coverage=1 00:15:34.322 --rc genhtml_legend=1 00:15:34.322 --rc geninfo_all_blocks=1 00:15:34.322 --rc geninfo_unexecuted_blocks=1 00:15:34.322 00:15:34.322 ' 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:34.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:34.322 --rc genhtml_branch_coverage=1 00:15:34.322 --rc 
genhtml_function_coverage=1 00:15:34.322 --rc genhtml_legend=1 00:15:34.322 --rc geninfo_all_blocks=1 00:15:34.322 --rc geninfo_unexecuted_blocks=1 00:15:34.322 00:15:34.322 ' 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:34.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=4d83b141-9959-4ed9-ae33-53ee99a5ecff 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=69d40be6-8103-4f81-9fe2-56080a3d43c6 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:34.322 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=3679a81f-3f08-412e-8ed4-5542a9f4ea62 00:15:34.323 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:34.323 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:34.323 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.323 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:34.323 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@436 -- # local -g 
is_hw=no 00:15:34.323 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:34.323 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.323 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.323 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.323 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:34.323 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:34.323 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:34.323 13:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:36.230 13:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.230 13:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.230 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:15:36.231 Found 0000:09:00.0 (0x8086 - 0x1592) 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:15:36.231 Found 0000:09:00.1 (0x8086 - 0x1592) 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: 
cvl_0_0' 00:15:36.231 Found net devices under 0000:09:00.0: cvl_0_0 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:36.231 Found net devices under 0000:09:00.1: cvl_0_1 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # is_hw=yes 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:36.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:15:36.231 00:15:36.231 --- 10.0.0.2 ping statistics --- 00:15:36.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.231 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:15:36.231 00:15:36.231 --- 10.0.0.1 ping statistics --- 00:15:36.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.231 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # return 0 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # nvmfpid=1775691 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@506 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # waitforlisten 1775691 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1775691 ']' 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:36.231 13:26:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:36.231 [2024-10-07 13:26:17.854029] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:15:36.231 [2024-10-07 13:26:17.854109] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.231 [2024-10-07 13:26:17.916807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.489 [2024-10-07 13:26:18.020630] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.489 [2024-10-07 13:26:18.020696] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:36.489 [2024-10-07 13:26:18.020725] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.489 [2024-10-07 13:26:18.020737] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.489 [2024-10-07 13:26:18.020746] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.489 [2024-10-07 13:26:18.021292] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.489 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.489 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:36.489 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:36.489 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:36.489 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:36.489 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.489 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:36.747 [2024-10-07 13:26:18.420563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.747 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:36.747 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:36.747 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:15:37.006 Malloc1 00:15:37.265 13:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:37.523 Malloc2 00:15:37.523 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:37.781 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:38.039 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:38.298 [2024-10-07 13:26:19.903867] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:38.298 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:38.298 13:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3679a81f-3f08-412e-8ed4-5542a9f4ea62 -a 10.0.0.2 -s 4420 -i 4 00:15:38.559 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:38.559 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:38.559 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:38.559 13:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:15:38.559 13:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:40.464 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:40.464 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:40.464 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:40.464 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:40.464 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:40.464 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:40.464 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:40.464 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:40.723 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:40.723 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:40.723 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:40.723 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.723 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:40.723 [ 0]:0x1 00:15:40.723 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:40.723 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.723 
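The `connect`/`waitforserial` pattern above polls `lsblk -l -o NAME,SERIAL` until the expected number of namespaces carrying the test serial appear. A minimal runnable sketch of that polling loop, with `lsblk` stubbed out so the logic can execute without a live NVMe/TCP target (the stub and its output are illustrative, not from the real host):

```shell
#!/usr/bin/env bash
# Stub standing in for `lsblk -l -o NAME,SERIAL` on a host where the
# SPDK test namespace has attached (hypothetical device name).
lsblk_stub() { printf 'nvme0n1 SPDKISFASTANDAWESOME\n'; }

# Sketch of the waitforserial helper seen in the log: retry up to 16
# times, succeeding once the device count matches the expectation.
waitforserial() {
    local serial=$1 expected=${2:-1} i=0 found=0
    while (( i++ <= 15 )); do
        found=$(lsblk_stub | grep -c "$serial")
        (( found == expected )) && return 0
        sleep 1
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME 1 && echo connected
```

With the real `lsblk` in place of the stub, this is the same wait-then-proceed handshake the test uses after every `nvme connect`.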
13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5191af6342fc41da928373d738a23ac4 00:15:40.723 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5191af6342fc41da928373d738a23ac4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.723 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:40.981 [ 0]:0x1 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5191af6342fc41da928373d738a23ac4 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5191af6342fc41da928373d738a23ac4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:40.981 [ 1]:0x2 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
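The repeated `ns_is_visible` checks above decide visibility by reading the namespace NGUID via `nvme id-ns ... | jq -r .nguid` and comparing it against all zeros (an invisible namespace identifies as a zero NGUID). The comparison itself can be sketched as a standalone function, fed with NGUID values taken from the log:

```shell
#!/usr/bin/env bash
# Sketch of the visibility predicate from ns_masking.sh: a namespace
# counts as visible when its reported NGUID is non-zero. In the real
# test the argument comes from `nvme id-ns /dev/nvme0 -n <nsid> -o json`.
ns_is_visible() {
    local nguid=$1
    [[ $nguid != "00000000000000000000000000000000" ]]
}

ns_is_visible "5191af6342fc41da928373d738a23ac4" && echo visible  # NGUID of ns 1 in the log
ns_is_visible "00000000000000000000000000000000" || echo masked
```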
00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f129b68e1d4f4e76b61a70a12709af54 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f129b68e1d4f4e76b61a70a12709af54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:40.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.981 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:41.241 13:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:41.500 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:41.500 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3679a81f-3f08-412e-8ed4-5542a9f4ea62 -a 10.0.0.2 -s 4420 -i 4 00:15:41.761 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:41.761 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:41.761 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:41.761 13:26:23 
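The sequence above removes the auto-visible namespace and re-adds it with `--no-auto-visible`, after which per-host visibility is controlled with `nvmf_ns_add_host`/`nvmf_ns_remove_host`. A dry-run sketch of that RPC sequence, with `rpc.py` replaced by an echo so it runs without a live `nvmf_tgt` (swap the stub for the real `scripts/rpc.py` to execute it; NQNs and bdev names are taken from the log):

```shell
#!/usr/bin/env bash
# Dry-run stub: print each RPC instead of sending it to the target.
rpc() { echo "rpc.py $*"; }

rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
```

Until the `nvmf_ns_add_host` call lands, a host connecting to the subsystem sees the namespace's NGUID as all zeros, which is exactly what the `NOT ns_is_visible 0x1` assertions in the log verify.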
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:15:41.761 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:15:41.761 13:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 
00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:43.759 [ 0]:0x2 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:43.759 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.017 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f129b68e1d4f4e76b61a70a12709af54 00:15:44.017 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f129b68e1d4f4e76b61a70a12709af54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.017 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:44.275 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:44.275 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.275 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:44.275 [ 0]:0x1 00:15:44.275 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.275 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.275 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5191af6342fc41da928373d738a23ac4 00:15:44.275 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5191af6342fc41da928373d738a23ac4 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.275 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:44.275 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.275 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:44.275 [ 1]:0x2 00:15:44.276 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.276 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.276 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f129b68e1d4f4e76b61a70a12709af54 00:15:44.276 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f129b68e1d4f4e76b61a70a12709af54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.276 13:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:44.533 [ 0]:0x2 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f129b68e1d4f4e76b61a70a12709af54 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f129b68e1d4f4e76b61a70a12709af54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:44.533 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:44.792 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.792 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:45.050 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:45.050 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3679a81f-3f08-412e-8ed4-5542a9f4ea62 -a 10.0.0.2 -s 4420 -i 4 00:15:45.308 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:45.308 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:15:45.308 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:45.308 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:45.308 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:45.308 13:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:15:47.212 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:47.212 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:47.212 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:47.212 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:47.212 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.212 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:15:47.212 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:47.212 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:47.212 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:47.212 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:47.212 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:47.212 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.212 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:47.470 [ 0]:0x1 00:15:47.470 13:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.470 13:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.470 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5191af6342fc41da928373d738a23ac4 00:15:47.470 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5191af6342fc41da928373d738a23ac4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.470 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:47.470 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.470 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:47.470 [ 1]:0x2 00:15:47.470 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:47.470 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.470 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f129b68e1d4f4e76b61a70a12709af54 00:15:47.470 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f129b68e1d4f4e76b61a70a12709af54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.470 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:47.728 
13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:47.728 [ 0]:0x2 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:47.728 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.986 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f129b68e1d4f4e76b61a70a12709af54 00:15:47.986 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f129b68e1d4f4e76b61a70a12709af54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.986 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:47.986 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:47.986 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:47.986 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.986 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:47.986 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.986 13:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:47.986 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.986 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:47.986 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:47.986 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:47.986 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:48.245 [2024-10-07 13:26:29.737590] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:48.245 request: 00:15:48.245 { 00:15:48.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:48.245 "nsid": 2, 00:15:48.245 "host": "nqn.2016-06.io.spdk:host1", 00:15:48.245 "method": "nvmf_ns_remove_host", 00:15:48.245 "req_id": 1 00:15:48.245 } 00:15:48.245 Got JSON-RPC error response 00:15:48.245 response: 00:15:48.245 { 00:15:48.245 "code": -32602, 00:15:48.245 "message": "Invalid parameters" 00:15:48.245 } 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
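The `nvmf_ns_remove_host` call above is expected to fail (the test wraps it in `NOT`), apparently because namespace 2 was added auto-visible, so per-host visibility does not apply to it; the target answers with a JSON-RPC error object. A small sketch of pulling the error code out of such a response with plain bash parameter expansion (the response text is reproduced from the log):

```shell
#!/usr/bin/env bash
# JSON-RPC error body as returned in the log for the rejected call.
resp='{"code": -32602, "message": "Invalid parameters"}'

# Strip everything up to and including '"code": ', then cut at the comma.
code=${resp#*'"code": '}
code=${code%%,*}
echo "$code"  # -32602
```

For anything beyond a flat error object, `jq -r .code` (as the test already uses for NGUIDs) is the more robust choice.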
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:48.245 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:48.246 13:26:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:48.246 [ 0]:0x2 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f129b68e1d4f4e76b61a70a12709af54 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f129b68e1d4f4e76b61a70a12709af54 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:48.246 13:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:48.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.504 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1777256 00:15:48.504 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:48.504 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.504 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1777256 /var/tmp/host.sock 00:15:48.504 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 1777256 ']' 00:15:48.504 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:15:48.504 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:48.504 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:48.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:48.504 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:48.504 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:48.504 [2024-10-07 13:26:30.100852] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:15:48.504 [2024-10-07 13:26:30.100932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1777256 ] 00:15:48.504 [2024-10-07 13:26:30.157050] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:48.763 [2024-10-07 13:26:30.267813] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.022 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:49.022 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:15:49.022 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:49.279 13:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:49.537 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 4d83b141-9959-4ed9-ae33-53ee99a5ecff 00:15:49.537 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:15:49.537 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 4D83B14199594ED9AE3353EE99A5ECFF -i 00:15:49.795 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 69d40be6-8103-4f81-9fe2-56080a3d43c6 00:15:49.795 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@785 -- # tr -d - 00:15:49.795 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 69D40BE681034F819FE256080A3D43C6 -i 00:15:50.053 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:50.311 13:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:50.569 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:50.569 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:50.825 nvme0n1 00:15:50.826 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:50.826 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:51.391 nvme1n2 00:15:51.391 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:51.391 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:51.391 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:51.391 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:51.391 13:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:51.649 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:51.649 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:51.649 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:51.649 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:51.906 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 4d83b141-9959-4ed9-ae33-53ee99a5ecff == \4\d\8\3\b\1\4\1\-\9\9\5\9\-\4\e\d\9\-\a\e\3\3\-\5\3\e\e\9\9\a\5\e\c\f\f ]] 00:15:51.906 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:51.906 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:51.906 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:52.164 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 69d40be6-8103-4f81-9fe2-56080a3d43c6 == \6\9\d\4\0\b\e\6\-\8\1\0\3\-\4\f\8\1\-\9\f\e\2\-\5\6\0\8\0\a\3\d\4\3\c\6 ]] 00:15:52.164 13:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1777256 00:15:52.164 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1777256 ']' 00:15:52.165 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1777256 00:15:52.165 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:52.165 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.165 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1777256 00:15:52.165 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:52.165 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:52.165 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1777256' 00:15:52.165 killing process with pid 1777256 00:15:52.165 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1777256 00:15:52.165 13:26:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1777256 00:15:52.733 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@514 -- # nvmfcleanup 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- nvmf/common.sh@121 -- # sync 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:52.993 rmmod nvme_tcp 00:15:52.993 rmmod nvme_fabrics 00:15:52.993 rmmod nvme_keyring 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@515 -- # '[' -n 1775691 ']' 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # killprocess 1775691 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 1775691 ']' 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 1775691 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1775691 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:52.993 13:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1775691' 00:15:52.993 killing process with pid 1775691 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 1775691 00:15:52.993 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 1775691 00:15:53.252 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:15:53.252 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:15:53.252 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:15:53.252 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:53.252 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-save 00:15:53.252 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:15:53.252 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@789 -- # iptables-restore 00:15:53.252 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:53.252 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:53.252 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.252 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.252 13:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.789 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:55.789 00:15:55.789 real 0m21.496s 00:15:55.789 user 0m28.456s 00:15:55.789 sys 
0m4.059s 00:15:55.789 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:55.789 13:26:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:55.789 ************************************ 00:15:55.789 END TEST nvmf_ns_masking 00:15:55.789 ************************************ 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:55.789 ************************************ 00:15:55.789 START TEST nvmf_nvme_cli 00:15:55.789 ************************************ 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:55.789 * Looking for test storage... 
00:15:55.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lcov --version 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:55.789 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:55.790 13:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:55.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.790 --rc 
genhtml_branch_coverage=1 00:15:55.790 --rc genhtml_function_coverage=1 00:15:55.790 --rc genhtml_legend=1 00:15:55.790 --rc geninfo_all_blocks=1 00:15:55.790 --rc geninfo_unexecuted_blocks=1 00:15:55.790 00:15:55.790 ' 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:55.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.790 --rc genhtml_branch_coverage=1 00:15:55.790 --rc genhtml_function_coverage=1 00:15:55.790 --rc genhtml_legend=1 00:15:55.790 --rc geninfo_all_blocks=1 00:15:55.790 --rc geninfo_unexecuted_blocks=1 00:15:55.790 00:15:55.790 ' 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:55.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.790 --rc genhtml_branch_coverage=1 00:15:55.790 --rc genhtml_function_coverage=1 00:15:55.790 --rc genhtml_legend=1 00:15:55.790 --rc geninfo_all_blocks=1 00:15:55.790 --rc geninfo_unexecuted_blocks=1 00:15:55.790 00:15:55.790 ' 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:55.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.790 --rc genhtml_branch_coverage=1 00:15:55.790 --rc genhtml_function_coverage=1 00:15:55.790 --rc genhtml_legend=1 00:15:55.790 --rc geninfo_all_blocks=1 00:15:55.790 --rc geninfo_unexecuted_blocks=1 00:15:55.790 00:15:55.790 ' 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.790 13:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:55.790 13:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.790 13:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:55.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # prepare_net_devs 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@436 -- # local -g is_hw=no 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # remove_spdk_ns 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:55.790 13:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:57.692 13:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:15:57.692 Found 0000:09:00.0 (0x8086 - 0x1592) 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:57.692 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:15:57.693 Found 0000:09:00.1 (0x8086 - 0x1592) 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:15:57.693 13:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:57.693 Found net devices under 0000:09:00.0: cvl_0_0 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ up == up ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:57.693 Found net devices under 0000:09:00.1: cvl_0_1 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # is_hw=yes 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:57.693 13:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@788 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:57.693 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:57.693 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:15:57.693 00:15:57.693 --- 10.0.0.2 ping statistics --- 00:15:57.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.693 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:57.693 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.693 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:15:57.693 00:15:57.693 --- 10.0.0.1 ping statistics --- 00:15:57.693 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.693 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # return 0 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:15:57.693 13:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # nvmfpid=1779638 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # waitforlisten 1779638 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 1779638 ']' 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.693 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:57.693 [2024-10-07 13:26:39.395618] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:15:57.693 [2024-10-07 13:26:39.395708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.951 [2024-10-07 13:26:39.456426] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:57.951 [2024-10-07 13:26:39.556945] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.951 [2024-10-07 13:26:39.557005] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.951 [2024-10-07 13:26:39.557033] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.951 [2024-10-07 13:26:39.557045] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.951 [2024-10-07 13:26:39.557055] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:57.951 [2024-10-07 13:26:39.558727] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.951 [2024-10-07 13:26:39.558791] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:15:57.951 [2024-10-07 13:26:39.558816] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:15:57.951 [2024-10-07 13:26:39.558819] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.224 [2024-10-07 13:26:39.721772] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.224 Malloc0 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.224 Malloc1 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.224 [2024-10-07 13:26:39.806849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.224 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.225 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:58.225 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.225 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:58.225 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.225 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -a 10.0.0.2 -s 4420 00:15:58.483 00:15:58.483 Discovery Log Number of Records 2, Generation counter 2 00:15:58.483 =====Discovery Log Entry 0====== 00:15:58.483 trtype: tcp 00:15:58.483 adrfam: ipv4 00:15:58.483 subtype: current discovery subsystem 00:15:58.483 treq: not required 00:15:58.483 portid: 0 00:15:58.483 trsvcid: 4420 
00:15:58.483 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:58.483 traddr: 10.0.0.2 00:15:58.483 eflags: explicit discovery connections, duplicate discovery information 00:15:58.483 sectype: none 00:15:58.483 =====Discovery Log Entry 1====== 00:15:58.483 trtype: tcp 00:15:58.483 adrfam: ipv4 00:15:58.483 subtype: nvme subsystem 00:15:58.483 treq: not required 00:15:58.483 portid: 0 00:15:58.483 trsvcid: 4420 00:15:58.483 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:58.483 traddr: 10.0.0.2 00:15:58.483 eflags: none 00:15:58.483 sectype: none 00:15:58.483 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:58.483 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:58.483 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:15:58.483 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:58.483 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:15:58.483 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:15:58.483 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:58.483 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:15:58.483 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:15:58.483 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:58.483 13:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid=21b7cb46-a602-e411-a339-001e67bc3be4 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:59.048 13:26:40 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:59.048 13:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:15:59.048 13:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:15:59.048 13:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:15:59.048 13:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:15:59.048 13:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:16:00.945 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:00.945 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:00.945 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:00.945 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:16:00.945 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:00.945 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:16:00.945 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:00.945 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:16:00.945 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:00.945 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:16:01.202 
13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ --------------------- == /dev/nvme* ]] 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:01.202 /dev/nvme0n2 ]] 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # local dev _ 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # nvme list 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ Node == /dev/nvme* ]] 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ 
--------------------- == /dev/nvme* ]] 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n1 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@551 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # echo /dev/nvme0n2 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # read -r dev _ 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:01.202 13:26:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:01.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # 
return 0 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@514 -- # nvmfcleanup 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:01.461 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:01.461 rmmod nvme_tcp 00:16:01.461 rmmod nvme_fabrics 00:16:01.719 rmmod nvme_keyring 00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@515 -- # '[' -n 1779638 ']' 
00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # killprocess 1779638 00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 1779638 ']' 00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 1779638 00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1779638 00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1779638' 00:16:01.719 killing process with pid 1779638 00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 1779638 00:16:01.719 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 1779638 00:16:01.978 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:16:01.978 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:16:01.978 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:16:01.978 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:01.978 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-save 00:16:01.978 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # grep -v 
SPDK_NVMF 00:16:01.978 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@789 -- # iptables-restore 00:16:01.978 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:01.978 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:01.978 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.978 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:01.978 13:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.882 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:03.882 00:16:03.882 real 0m8.543s 00:16:03.882 user 0m16.162s 00:16:03.882 sys 0m2.284s 00:16:03.882 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:03.882 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:03.882 ************************************ 00:16:03.882 END TEST nvmf_nvme_cli 00:16:03.882 ************************************ 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.141 ************************************ 00:16:04.141 
START TEST nvmf_vfio_user 00:16:04.141 ************************************ 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:04.141 * Looking for test storage... 00:16:04.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lcov --version 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:04.141 13:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:04.141 13:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:04.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.141 --rc genhtml_branch_coverage=1 00:16:04.141 --rc genhtml_function_coverage=1 00:16:04.141 --rc genhtml_legend=1 00:16:04.141 --rc geninfo_all_blocks=1 00:16:04.141 --rc geninfo_unexecuted_blocks=1 00:16:04.141 00:16:04.141 ' 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:04.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.141 --rc genhtml_branch_coverage=1 00:16:04.141 --rc genhtml_function_coverage=1 00:16:04.141 --rc genhtml_legend=1 00:16:04.141 --rc geninfo_all_blocks=1 00:16:04.141 --rc geninfo_unexecuted_blocks=1 00:16:04.141 00:16:04.141 ' 00:16:04.141 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:04.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.141 --rc genhtml_branch_coverage=1 00:16:04.141 --rc genhtml_function_coverage=1 00:16:04.141 --rc genhtml_legend=1 00:16:04.141 --rc geninfo_all_blocks=1 00:16:04.141 --rc geninfo_unexecuted_blocks=1 00:16:04.142 00:16:04.142 ' 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:04.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:04.142 --rc genhtml_branch_coverage=1 00:16:04.142 --rc genhtml_function_coverage=1 00:16:04.142 --rc genhtml_legend=1 00:16:04.142 --rc geninfo_all_blocks=1 00:16:04.142 --rc geninfo_unexecuted_blocks=1 00:16:04.142 00:16:04.142 ' 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.142 
13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:04.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:04.142 13:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1780536 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1780536' 00:16:04.142 Process pid: 1780536 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1780536 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' 
-z 1780536 ']' 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:04.142 13:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:04.142 [2024-10-07 13:26:45.853011] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:16:04.142 [2024-10-07 13:26:45.853115] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.400 [2024-10-07 13:26:45.909031] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:04.400 [2024-10-07 13:26:46.014620] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.400 [2024-10-07 13:26:46.014690] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.400 [2024-10-07 13:26:46.014707] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:04.400 [2024-10-07 13:26:46.014718] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:04.400 [2024-10-07 13:26:46.014728] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:04.400 [2024-10-07 13:26:46.016180] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.400 [2024-10-07 13:26:46.016244] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.400 [2024-10-07 13:26:46.016311] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.400 [2024-10-07 13:26:46.016314] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.657 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.657 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:04.657 13:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:05.589 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:05.846 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:05.846 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:05.846 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:05.846 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:05.846 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:06.104 Malloc1 00:16:06.104 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:06.361 13:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:06.618 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:06.875 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:06.875 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:06.875 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:07.132 Malloc2 00:16:07.132 13:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:07.389 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:07.953 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:07.953 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:07.953 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:07.953 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:16:07.953 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:07.953 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:07.953 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:07.953 [2024-10-07 13:26:49.652235] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:16:07.953 [2024-10-07 13:26:49.652277] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1780939 ] 00:16:08.212 [2024-10-07 13:26:49.685189] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:08.212 [2024-10-07 13:26:49.690689] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:08.212 [2024-10-07 13:26:49.690723] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f834d4af000 00:16:08.212 [2024-10-07 13:26:49.691677] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:08.212 [2024-10-07 13:26:49.692664] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:08.212 [2024-10-07 13:26:49.693678] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:08.212 [2024-10-07 13:26:49.694671] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:08.212 [2024-10-07 13:26:49.695685] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:08.212 [2024-10-07 13:26:49.696688] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:08.212 [2024-10-07 13:26:49.697705] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:08.212 [2024-10-07 13:26:49.698694] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:08.212 [2024-10-07 13:26:49.699703] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:08.212 [2024-10-07 13:26:49.699725] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f834d4a4000 00:16:08.212 [2024-10-07 13:26:49.700873] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:08.212 [2024-10-07 13:26:49.712519] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:08.212 [2024-10-07 13:26:49.712558] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:16:08.212 [2024-10-07 13:26:49.720823] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:08.212 
[2024-10-07 13:26:49.720875] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:08.212 [2024-10-07 13:26:49.720991] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:16:08.212 [2024-10-07 13:26:49.721020] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:16:08.212 [2024-10-07 13:26:49.721032] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:16:08.212 [2024-10-07 13:26:49.721821] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:08.212 [2024-10-07 13:26:49.721842] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:16:08.212 [2024-10-07 13:26:49.721856] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:16:08.212 [2024-10-07 13:26:49.722821] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:08.212 [2024-10-07 13:26:49.722842] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:16:08.212 [2024-10-07 13:26:49.722856] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:16:08.212 [2024-10-07 13:26:49.723828] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:08.212 [2024-10-07 13:26:49.723848] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:08.212 [2024-10-07 13:26:49.724834] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:08.212 [2024-10-07 13:26:49.724855] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:16:08.212 [2024-10-07 13:26:49.724864] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:16:08.212 [2024-10-07 13:26:49.724876] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:08.212 [2024-10-07 13:26:49.724985] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:16:08.212 [2024-10-07 13:26:49.724994] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:08.212 [2024-10-07 13:26:49.725007] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:08.212 [2024-10-07 13:26:49.725836] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:08.212 [2024-10-07 13:26:49.726837] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:08.212 [2024-10-07 13:26:49.727850] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:08.212 [2024-10-07 13:26:49.728844] vfio_user.c:2836:enable_ctrlr: 
*NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:08.212 [2024-10-07 13:26:49.728939] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:08.212 [2024-10-07 13:26:49.729862] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:08.212 [2024-10-07 13:26:49.729882] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:08.212 [2024-10-07 13:26:49.729892] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:16:08.212 [2024-10-07 13:26:49.729917] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:16:08.212 [2024-10-07 13:26:49.729931] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:16:08.212 [2024-10-07 13:26:49.729954] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:08.212 [2024-10-07 13:26:49.729979] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:08.212 [2024-10-07 13:26:49.729986] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:08.212 [2024-10-07 13:26:49.730005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:08.212 [2024-10-07 13:26:49.730055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:08.212 [2024-10-07 
13:26:49.730070] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:16:08.212 [2024-10-07 13:26:49.730079] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:16:08.212 [2024-10-07 13:26:49.730086] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:16:08.212 [2024-10-07 13:26:49.730093] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:08.212 [2024-10-07 13:26:49.730101] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:16:08.212 [2024-10-07 13:26:49.730109] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:16:08.212 [2024-10-07 13:26:49.730116] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:16:08.212 [2024-10-07 13:26:49.730128] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:16:08.212 [2024-10-07 13:26:49.730142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:08.212 [2024-10-07 13:26:49.730161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:08.212 [2024-10-07 13:26:49.730179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.212 [2024-10-07 13:26:49.730191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 
cdw10:00000000 cdw11:00000000 00:16:08.212 [2024-10-07 13:26:49.730203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.212 [2024-10-07 13:26:49.730215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.212 [2024-10-07 13:26:49.730223] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:16:08.212 [2024-10-07 13:26:49.730238] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:08.212 [2024-10-07 13:26:49.730253] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:08.212 [2024-10-07 13:26:49.730265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:08.212 [2024-10-07 13:26:49.730275] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:16:08.212 [2024-10-07 13:26:49.730283] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:08.212 [2024-10-07 13:26:49.730294] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:16:08.212 [2024-10-07 13:26:49.730307] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:16:08.212 [2024-10-07 13:26:49.730321] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:08.212 [2024-10-07 13:26:49.730333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:08.212 [2024-10-07 13:26:49.730398] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:16:08.212 [2024-10-07 13:26:49.730414] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:16:08.212 [2024-10-07 13:26:49.730426] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:08.213 [2024-10-07 13:26:49.730435] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:08.213 [2024-10-07 13:26:49.730440] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:08.213 [2024-10-07 13:26:49.730450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:08.213 [2024-10-07 13:26:49.730464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:08.213 [2024-10-07 13:26:49.730486] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:16:08.213 [2024-10-07 13:26:49.730502] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:16:08.213 [2024-10-07 13:26:49.730516] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:16:08.213 [2024-10-07 13:26:49.730532] 
nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:08.213 [2024-10-07 13:26:49.730541] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:08.213 [2024-10-07 13:26:49.730547] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:08.213 [2024-10-07 13:26:49.730556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:08.213 [2024-10-07 13:26:49.730584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:08.213 [2024-10-07 13:26:49.730604] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:08.213 [2024-10-07 13:26:49.730619] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:08.213 [2024-10-07 13:26:49.730631] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:08.213 [2024-10-07 13:26:49.730639] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:08.213 [2024-10-07 13:26:49.730660] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:08.213 [2024-10-07 13:26:49.730684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:08.213 [2024-10-07 13:26:49.730698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:08.213 [2024-10-07 13:26:49.730713] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:08.213 [2024-10-07 13:26:49.730725] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:16:08.213 [2024-10-07 13:26:49.730739] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:16:08.213 [2024-10-07 13:26:49.730750] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:16:08.213 [2024-10-07 13:26:49.730759] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:08.213 [2024-10-07 13:26:49.730767] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:16:08.213 [2024-10-07 13:26:49.730775] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:16:08.213 [2024-10-07 13:26:49.730783] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:16:08.213 [2024-10-07 13:26:49.730791] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:16:08.213 [2024-10-07 13:26:49.730817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:08.213 [2024-10-07 13:26:49.730832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:08.213 [2024-10-07 13:26:49.730850] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:08.213 [2024-10-07 13:26:49.730862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:08.213 [2024-10-07 13:26:49.730878] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:08.213 [2024-10-07 13:26:49.730895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:08.213 [2024-10-07 13:26:49.730913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:08.213 [2024-10-07 13:26:49.730925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:08.213 [2024-10-07 13:26:49.730948] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:08.213 [2024-10-07 13:26:49.730973] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:08.213 [2024-10-07 13:26:49.730979] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:08.213 [2024-10-07 13:26:49.730985] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:08.213 [2024-10-07 13:26:49.730991] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:08.213 [2024-10-07 13:26:49.731000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:08.213 [2024-10-07 13:26:49.731013] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:08.213 [2024-10-07 
13:26:49.731021] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:08.213 [2024-10-07 13:26:49.731027] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:08.213 [2024-10-07 13:26:49.731036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:08.213 [2024-10-07 13:26:49.731047] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:08.213 [2024-10-07 13:26:49.731055] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:08.213 [2024-10-07 13:26:49.731061] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:08.213 [2024-10-07 13:26:49.731070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:08.213 [2024-10-07 13:26:49.731082] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:08.213 [2024-10-07 13:26:49.731090] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:08.213 [2024-10-07 13:26:49.731095] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:08.213 [2024-10-07 13:26:49.731104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:08.213 [2024-10-07 13:26:49.731116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:08.213 [2024-10-07 13:26:49.731136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 
00:16:08.213 [2024-10-07 13:26:49.731153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:08.213 [2024-10-07 13:26:49.731165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:08.213 ===================================================== 00:16:08.213 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:08.213 ===================================================== 00:16:08.213 Controller Capabilities/Features 00:16:08.213 ================================ 00:16:08.213 Vendor ID: 4e58 00:16:08.213 Subsystem Vendor ID: 4e58 00:16:08.213 Serial Number: SPDK1 00:16:08.213 Model Number: SPDK bdev Controller 00:16:08.213 Firmware Version: 25.01 00:16:08.213 Recommended Arb Burst: 6 00:16:08.213 IEEE OUI Identifier: 8d 6b 50 00:16:08.213 Multi-path I/O 00:16:08.213 May have multiple subsystem ports: Yes 00:16:08.213 May have multiple controllers: Yes 00:16:08.213 Associated with SR-IOV VF: No 00:16:08.213 Max Data Transfer Size: 131072 00:16:08.213 Max Number of Namespaces: 32 00:16:08.213 Max Number of I/O Queues: 127 00:16:08.213 NVMe Specification Version (VS): 1.3 00:16:08.213 NVMe Specification Version (Identify): 1.3 00:16:08.213 Maximum Queue Entries: 256 00:16:08.213 Contiguous Queues Required: Yes 00:16:08.213 Arbitration Mechanisms Supported 00:16:08.213 Weighted Round Robin: Not Supported 00:16:08.213 Vendor Specific: Not Supported 00:16:08.213 Reset Timeout: 15000 ms 00:16:08.213 Doorbell Stride: 4 bytes 00:16:08.213 NVM Subsystem Reset: Not Supported 00:16:08.213 Command Sets Supported 00:16:08.213 NVM Command Set: Supported 00:16:08.213 Boot Partition: Not Supported 00:16:08.213 Memory Page Size Minimum: 4096 bytes 00:16:08.213 Memory Page Size Maximum: 4096 bytes 00:16:08.213 Persistent Memory Region: Not Supported 00:16:08.213 Optional Asynchronous Events 
Supported 00:16:08.213 Namespace Attribute Notices: Supported 00:16:08.213 Firmware Activation Notices: Not Supported 00:16:08.214 ANA Change Notices: Not Supported 00:16:08.214 PLE Aggregate Log Change Notices: Not Supported 00:16:08.214 LBA Status Info Alert Notices: Not Supported 00:16:08.214 EGE Aggregate Log Change Notices: Not Supported 00:16:08.214 Normal NVM Subsystem Shutdown event: Not Supported 00:16:08.214 Zone Descriptor Change Notices: Not Supported 00:16:08.214 Discovery Log Change Notices: Not Supported 00:16:08.214 Controller Attributes 00:16:08.214 128-bit Host Identifier: Supported 00:16:08.214 Non-Operational Permissive Mode: Not Supported 00:16:08.214 NVM Sets: Not Supported 00:16:08.214 Read Recovery Levels: Not Supported 00:16:08.214 Endurance Groups: Not Supported 00:16:08.214 Predictable Latency Mode: Not Supported 00:16:08.214 Traffic Based Keep ALive: Not Supported 00:16:08.214 Namespace Granularity: Not Supported 00:16:08.214 SQ Associations: Not Supported 00:16:08.214 UUID List: Not Supported 00:16:08.214 Multi-Domain Subsystem: Not Supported 00:16:08.214 Fixed Capacity Management: Not Supported 00:16:08.214 Variable Capacity Management: Not Supported 00:16:08.214 Delete Endurance Group: Not Supported 00:16:08.214 Delete NVM Set: Not Supported 00:16:08.214 Extended LBA Formats Supported: Not Supported 00:16:08.214 Flexible Data Placement Supported: Not Supported 00:16:08.214 00:16:08.214 Controller Memory Buffer Support 00:16:08.214 ================================ 00:16:08.214 Supported: No 00:16:08.214 00:16:08.214 Persistent Memory Region Support 00:16:08.214 ================================ 00:16:08.214 Supported: No 00:16:08.214 00:16:08.214 Admin Command Set Attributes 00:16:08.214 ============================ 00:16:08.214 Security Send/Receive: Not Supported 00:16:08.214 Format NVM: Not Supported 00:16:08.214 Firmware Activate/Download: Not Supported 00:16:08.214 Namespace Management: Not Supported 00:16:08.214 Device Self-Test: 
Not Supported 00:16:08.214 Directives: Not Supported 00:16:08.214 NVMe-MI: Not Supported 00:16:08.214 Virtualization Management: Not Supported 00:16:08.214 Doorbell Buffer Config: Not Supported 00:16:08.214 Get LBA Status Capability: Not Supported 00:16:08.214 Command & Feature Lockdown Capability: Not Supported 00:16:08.214 Abort Command Limit: 4 00:16:08.214 Async Event Request Limit: 4 00:16:08.214 Number of Firmware Slots: N/A 00:16:08.214 Firmware Slot 1 Read-Only: N/A 00:16:08.214 Firmware Activation Without Reset: N/A 00:16:08.214 Multiple Update Detection Support: N/A 00:16:08.214 Firmware Update Granularity: No Information Provided 00:16:08.214 Per-Namespace SMART Log: No 00:16:08.214 Asymmetric Namespace Access Log Page: Not Supported 00:16:08.214 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:08.214 Command Effects Log Page: Supported 00:16:08.214 Get Log Page Extended Data: Supported 00:16:08.214 Telemetry Log Pages: Not Supported 00:16:08.214 Persistent Event Log Pages: Not Supported 00:16:08.214 Supported Log Pages Log Page: May Support 00:16:08.214 Commands Supported & Effects Log Page: Not Supported 00:16:08.214 Feature Identifiers & Effects Log Page:May Support 00:16:08.214 NVMe-MI Commands & Effects Log Page: May Support 00:16:08.214 Data Area 4 for Telemetry Log: Not Supported 00:16:08.214 Error Log Page Entries Supported: 128 00:16:08.214 Keep Alive: Supported 00:16:08.214 Keep Alive Granularity: 10000 ms 00:16:08.214 00:16:08.214 NVM Command Set Attributes 00:16:08.214 ========================== 00:16:08.214 Submission Queue Entry Size 00:16:08.214 Max: 64 00:16:08.214 Min: 64 00:16:08.214 Completion Queue Entry Size 00:16:08.214 Max: 16 00:16:08.214 Min: 16 00:16:08.214 Number of Namespaces: 32 00:16:08.214 Compare Command: Supported 00:16:08.214 Write Uncorrectable Command: Not Supported 00:16:08.214 Dataset Management Command: Supported 00:16:08.214 Write Zeroes Command: Supported 00:16:08.214 Set Features Save Field: Not Supported 
00:16:08.214 Reservations: Not Supported 00:16:08.214 Timestamp: Not Supported 00:16:08.214 Copy: Supported 00:16:08.214 Volatile Write Cache: Present 00:16:08.214 Atomic Write Unit (Normal): 1 00:16:08.214 Atomic Write Unit (PFail): 1 00:16:08.214 Atomic Compare & Write Unit: 1 00:16:08.214 Fused Compare & Write: Supported 00:16:08.214 Scatter-Gather List 00:16:08.214 SGL Command Set: Supported (Dword aligned) 00:16:08.214 SGL Keyed: Not Supported 00:16:08.214 SGL Bit Bucket Descriptor: Not Supported 00:16:08.214 SGL Metadata Pointer: Not Supported 00:16:08.214 Oversized SGL: Not Supported 00:16:08.214 SGL Metadata Address: Not Supported 00:16:08.214 SGL Offset: Not Supported 00:16:08.214 Transport SGL Data Block: Not Supported 00:16:08.214 Replay Protected Memory Block: Not Supported 00:16:08.214 00:16:08.214 Firmware Slot Information 00:16:08.214 ========================= 00:16:08.214 Active slot: 1 00:16:08.214 Slot 1 Firmware Revision: 25.01 00:16:08.214 00:16:08.214 00:16:08.214 Commands Supported and Effects 00:16:08.214 ============================== 00:16:08.214 Admin Commands 00:16:08.214 -------------- 00:16:08.214 Get Log Page (02h): Supported 00:16:08.214 Identify (06h): Supported 00:16:08.214 Abort (08h): Supported 00:16:08.214 Set Features (09h): Supported 00:16:08.214 Get Features (0Ah): Supported 00:16:08.214 Asynchronous Event Request (0Ch): Supported 00:16:08.214 Keep Alive (18h): Supported 00:16:08.214 I/O Commands 00:16:08.214 ------------ 00:16:08.214 Flush (00h): Supported LBA-Change 00:16:08.214 Write (01h): Supported LBA-Change 00:16:08.214 Read (02h): Supported 00:16:08.214 Compare (05h): Supported 00:16:08.214 Write Zeroes (08h): Supported LBA-Change 00:16:08.214 Dataset Management (09h): Supported LBA-Change 00:16:08.214 Copy (19h): Supported LBA-Change 00:16:08.214 00:16:08.214 Error Log 00:16:08.214 ========= 00:16:08.214 00:16:08.214 Arbitration 00:16:08.214 =========== 00:16:08.214 Arbitration Burst: 1 00:16:08.214 00:16:08.214 Power 
Management 00:16:08.214 ================ 00:16:08.214 Number of Power States: 1 00:16:08.214 Current Power State: Power State #0 00:16:08.214 Power State #0: 00:16:08.214 Max Power: 0.00 W 00:16:08.214 Non-Operational State: Operational 00:16:08.214 Entry Latency: Not Reported 00:16:08.214 Exit Latency: Not Reported 00:16:08.214 Relative Read Throughput: 0 00:16:08.214 Relative Read Latency: 0 00:16:08.214 Relative Write Throughput: 0 00:16:08.214 Relative Write Latency: 0 00:16:08.214 Idle Power: Not Reported 00:16:08.214 Active Power: Not Reported 00:16:08.214 Non-Operational Permissive Mode: Not Supported 00:16:08.214 00:16:08.214 Health Information 00:16:08.214 ================== 00:16:08.214 Critical Warnings: 00:16:08.214 Available Spare Space: OK 00:16:08.214 Temperature: OK 00:16:08.214 Device Reliability: OK 00:16:08.214 Read Only: No 00:16:08.214 Volatile Memory Backup: OK 00:16:08.214 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:08.214 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:08.214 Available Spare: 0% 00:16:08.214 Available Sp[2024-10-07 13:26:49.731282] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:08.214 [2024-10-07 13:26:49.731299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:08.214 [2024-10-07 13:26:49.731339] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:16:08.214 [2024-10-07 13:26:49.731356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.214 [2024-10-07 13:26:49.731370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.214 [2024-10-07 13:26:49.731381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.214 [2024-10-07 13:26:49.731390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.214 [2024-10-07 13:26:49.731874] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:08.214 [2024-10-07 13:26:49.731897] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:08.214 [2024-10-07 13:26:49.732869] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:08.214 [2024-10-07 13:26:49.732944] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:16:08.214 [2024-10-07 13:26:49.732959] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:16:08.214 [2024-10-07 13:26:49.733879] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:08.214 [2024-10-07 13:26:49.733904] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:16:08.214 [2024-10-07 13:26:49.733986] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:08.214 [2024-10-07 13:26:49.737679] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:08.214 are Threshold: 0% 00:16:08.214 Life Percentage Used: 0% 00:16:08.214 Data Units Read: 0 00:16:08.214 Data Units Written: 0 00:16:08.214 Host Read Commands: 0 00:16:08.214 Host Write Commands: 0 00:16:08.214 Controller Busy Time: 0 minutes 
00:16:08.215 Power Cycles: 0 00:16:08.215 Power On Hours: 0 hours 00:16:08.215 Unsafe Shutdowns: 0 00:16:08.215 Unrecoverable Media Errors: 0 00:16:08.215 Lifetime Error Log Entries: 0 00:16:08.215 Warning Temperature Time: 0 minutes 00:16:08.215 Critical Temperature Time: 0 minutes 00:16:08.215 00:16:08.215 Number of Queues 00:16:08.215 ================ 00:16:08.215 Number of I/O Submission Queues: 127 00:16:08.215 Number of I/O Completion Queues: 127 00:16:08.215 00:16:08.215 Active Namespaces 00:16:08.215 ================= 00:16:08.215 Namespace ID:1 00:16:08.215 Error Recovery Timeout: Unlimited 00:16:08.215 Command Set Identifier: NVM (00h) 00:16:08.215 Deallocate: Supported 00:16:08.215 Deallocated/Unwritten Error: Not Supported 00:16:08.215 Deallocated Read Value: Unknown 00:16:08.215 Deallocate in Write Zeroes: Not Supported 00:16:08.215 Deallocated Guard Field: 0xFFFF 00:16:08.215 Flush: Supported 00:16:08.215 Reservation: Supported 00:16:08.215 Namespace Sharing Capabilities: Multiple Controllers 00:16:08.215 Size (in LBAs): 131072 (0GiB) 00:16:08.215 Capacity (in LBAs): 131072 (0GiB) 00:16:08.215 Utilization (in LBAs): 131072 (0GiB) 00:16:08.215 NGUID: 3ED519B02E834CBBB841F19AA3A57A87 00:16:08.215 UUID: 3ed519b0-2e83-4cbb-b841-f19aa3a57a87 00:16:08.215 Thin Provisioning: Not Supported 00:16:08.215 Per-NS Atomic Units: Yes 00:16:08.215 Atomic Boundary Size (Normal): 0 00:16:08.215 Atomic Boundary Size (PFail): 0 00:16:08.215 Atomic Boundary Offset: 0 00:16:08.215 Maximum Single Source Range Length: 65535 00:16:08.215 Maximum Copy Length: 65535 00:16:08.215 Maximum Source Range Count: 1 00:16:08.215 NGUID/EUI64 Never Reused: No 00:16:08.215 Namespace Write Protected: No 00:16:08.215 Number of LBA Formats: 1 00:16:08.215 Current LBA Format: LBA Format #00 00:16:08.215 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:08.215 00:16:08.215 13:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:08.472 [2024-10-07 13:26:49.968555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:13.733 Initializing NVMe Controllers 00:16:13.733 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:13.733 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:13.733 Initialization complete. Launching workers. 00:16:13.733 ======================================================== 00:16:13.733 Latency(us) 00:16:13.733 Device Information : IOPS MiB/s Average min max 00:16:13.733 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32755.80 127.95 3908.99 1186.50 9721.50 00:16:13.733 ======================================================== 00:16:13.733 Total : 32755.80 127.95 3908.99 1186.50 9721.50 00:16:13.733 00:16:13.733 [2024-10-07 13:26:54.995233] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:13.733 13:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:13.733 [2024-10-07 13:26:55.237334] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:18.999 Initializing NVMe Controllers 00:16:18.999 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:18.999 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:18.999 
Initialization complete. Launching workers. 00:16:18.999 ======================================================== 00:16:18.999 Latency(us) 00:16:18.999 Device Information : IOPS MiB/s Average min max 00:16:18.999 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16018.19 62.57 8000.83 6991.37 15960.67 00:16:18.999 ======================================================== 00:16:18.999 Total : 16018.19 62.57 8000.83 6991.37 15960.67 00:16:18.999 00:16:18.999 [2024-10-07 13:27:00.275353] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:18.999 13:27:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:18.999 [2024-10-07 13:27:00.489442] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:24.318 [2024-10-07 13:27:05.568066] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:24.318 Initializing NVMe Controllers 00:16:24.318 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:24.318 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:24.318 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:24.318 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:24.318 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:24.318 Initialization complete. Launching workers. 
00:16:24.318 Starting thread on core 2 00:16:24.318 Starting thread on core 3 00:16:24.318 Starting thread on core 1 00:16:24.318 13:27:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:24.318 [2024-10-07 13:27:05.869162] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:27.598 [2024-10-07 13:27:08.931037] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:27.598 Initializing NVMe Controllers 00:16:27.598 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:27.598 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:27.598 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:27.598 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:27.598 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:27.598 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:27.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:27.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:27.598 Initialization complete. Launching workers. 
00:16:27.598 Starting thread on core 1 with urgent priority queue 00:16:27.598 Starting thread on core 2 with urgent priority queue 00:16:27.598 Starting thread on core 3 with urgent priority queue 00:16:27.598 Starting thread on core 0 with urgent priority queue 00:16:27.598 SPDK bdev Controller (SPDK1 ) core 0: 4156.33 IO/s 24.06 secs/100000 ios 00:16:27.598 SPDK bdev Controller (SPDK1 ) core 1: 4795.00 IO/s 20.86 secs/100000 ios 00:16:27.598 SPDK bdev Controller (SPDK1 ) core 2: 3808.67 IO/s 26.26 secs/100000 ios 00:16:27.598 SPDK bdev Controller (SPDK1 ) core 3: 4045.00 IO/s 24.72 secs/100000 ios 00:16:27.598 ======================================================== 00:16:27.598 00:16:27.598 13:27:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:27.598 [2024-10-07 13:27:09.227218] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:27.598 Initializing NVMe Controllers 00:16:27.598 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:27.598 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:27.598 Namespace ID: 1 size: 0GB 00:16:27.598 Initialization complete. 00:16:27.598 INFO: using host memory buffer for IO 00:16:27.598 Hello world! 
00:16:27.598 [2024-10-07 13:27:09.261760] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:27.598 13:27:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:27.856 [2024-10-07 13:27:09.546161] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:29.228 Initializing NVMe Controllers 00:16:29.228 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:29.228 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:29.228 Initialization complete. Launching workers. 00:16:29.228 submit (in ns) avg, min, max = 7477.1, 3507.8, 4015828.9 00:16:29.228 complete (in ns) avg, min, max = 30149.9, 2065.6, 5016122.2 00:16:29.228 00:16:29.228 Submit histogram 00:16:29.228 ================ 00:16:29.228 Range in us Cumulative Count 00:16:29.228 3.484 - 3.508: 0.0157% ( 2) 00:16:29.228 3.508 - 3.532: 0.3771% ( 46) 00:16:29.228 3.532 - 3.556: 1.2571% ( 112) 00:16:29.228 3.556 - 3.579: 4.1640% ( 370) 00:16:29.228 3.579 - 3.603: 8.6581% ( 572) 00:16:29.228 3.603 - 3.627: 15.6034% ( 884) 00:16:29.228 3.627 - 3.650: 23.2322% ( 971) 00:16:29.228 3.650 - 3.674: 29.9104% ( 850) 00:16:29.228 3.674 - 3.698: 36.5965% ( 851) 00:16:29.228 3.698 - 3.721: 43.7854% ( 915) 00:16:29.228 3.721 - 3.745: 48.9158% ( 653) 00:16:29.228 3.745 - 3.769: 53.4805% ( 581) 00:16:29.228 3.769 - 3.793: 57.5424% ( 517) 00:16:29.228 3.793 - 3.816: 61.4551% ( 498) 00:16:29.228 3.816 - 3.840: 66.0591% ( 586) 00:16:29.228 3.840 - 3.864: 70.6317% ( 582) 00:16:29.228 3.864 - 3.887: 74.5679% ( 501) 00:16:29.228 3.887 - 3.911: 78.1820% ( 460) 00:16:29.228 3.911 - 3.935: 81.1204% ( 374) 00:16:29.228 3.935 - 3.959: 83.8231% ( 344) 00:16:29.228 3.959 - 3.982: 85.9994% ( 277) 
00:16:29.228 3.982 - 4.006: 87.6650% ( 212) 00:16:29.228 4.006 - 4.030: 89.0006% ( 170) 00:16:29.228 4.030 - 4.053: 90.0613% ( 135) 00:16:29.228 4.053 - 4.077: 91.1298% ( 136) 00:16:29.228 4.077 - 4.101: 92.2062% ( 137) 00:16:29.228 4.101 - 4.124: 93.0625% ( 109) 00:16:29.228 4.124 - 4.148: 93.7382% ( 86) 00:16:29.228 4.148 - 4.172: 94.3903% ( 83) 00:16:29.228 4.172 - 4.196: 94.8460% ( 58) 00:16:29.228 4.196 - 4.219: 95.1917% ( 44) 00:16:29.228 4.219 - 4.243: 95.4903% ( 38) 00:16:29.228 4.243 - 4.267: 95.7338% ( 31) 00:16:29.228 4.267 - 4.290: 95.8831% ( 19) 00:16:29.228 4.290 - 4.314: 95.9931% ( 14) 00:16:29.228 4.314 - 4.338: 96.1738% ( 23) 00:16:29.228 4.338 - 4.361: 96.2838% ( 14) 00:16:29.228 4.361 - 4.385: 96.3859% ( 13) 00:16:29.228 4.385 - 4.409: 96.5352% ( 19) 00:16:29.228 4.409 - 4.433: 96.6609% ( 16) 00:16:29.228 4.433 - 4.456: 96.7159% ( 7) 00:16:29.228 4.456 - 4.480: 96.7630% ( 6) 00:16:29.228 4.480 - 4.504: 96.8180% ( 7) 00:16:29.228 4.504 - 4.527: 96.8573% ( 5) 00:16:29.228 4.527 - 4.551: 96.8887% ( 4) 00:16:29.228 4.551 - 4.575: 96.9359% ( 6) 00:16:29.228 4.599 - 4.622: 96.9437% ( 1) 00:16:29.228 4.622 - 4.646: 96.9516% ( 1) 00:16:29.228 4.646 - 4.670: 96.9595% ( 1) 00:16:29.228 4.741 - 4.764: 96.9673% ( 1) 00:16:29.228 4.764 - 4.788: 96.9752% ( 1) 00:16:29.228 4.788 - 4.812: 96.9909% ( 2) 00:16:29.228 4.812 - 4.836: 97.0145% ( 3) 00:16:29.228 4.836 - 4.859: 97.0616% ( 6) 00:16:29.228 4.859 - 4.883: 97.1009% ( 5) 00:16:29.228 4.883 - 4.907: 97.1402% ( 5) 00:16:29.228 4.907 - 4.930: 97.1873% ( 6) 00:16:29.228 4.930 - 4.954: 97.2344% ( 6) 00:16:29.228 4.954 - 4.978: 97.3130% ( 10) 00:16:29.228 4.978 - 5.001: 97.3680% ( 7) 00:16:29.228 5.001 - 5.025: 97.4309% ( 8) 00:16:29.228 5.025 - 5.049: 97.4780% ( 6) 00:16:29.228 5.049 - 5.073: 97.5094% ( 4) 00:16:29.228 5.073 - 5.096: 97.5409% ( 4) 00:16:29.228 5.096 - 5.120: 97.5644% ( 3) 00:16:29.228 5.144 - 5.167: 97.5723% ( 1) 00:16:29.228 5.167 - 5.191: 97.6116% ( 5) 00:16:29.228 5.191 - 5.215: 97.6351% ( 3) 
00:16:29.228 5.215 - 5.239: 97.6508% ( 2) 00:16:29.228 5.239 - 5.262: 97.6744% ( 3) 00:16:29.228 5.262 - 5.286: 97.6980% ( 3) 00:16:29.228 5.286 - 5.310: 97.7451% ( 6) 00:16:29.228 5.310 - 5.333: 97.7608% ( 2) 00:16:29.228 5.333 - 5.357: 97.7687% ( 1) 00:16:29.228 5.357 - 5.381: 97.7923% ( 3) 00:16:29.228 5.381 - 5.404: 97.8080% ( 2) 00:16:29.228 5.428 - 5.452: 97.8158% ( 1) 00:16:29.228 5.476 - 5.499: 97.8316% ( 2) 00:16:29.228 5.570 - 5.594: 97.8394% ( 1) 00:16:29.228 5.594 - 5.618: 97.8551% ( 2) 00:16:29.228 5.665 - 5.689: 97.8630% ( 1) 00:16:29.228 5.713 - 5.736: 97.8708% ( 1) 00:16:29.228 5.807 - 5.831: 97.8865% ( 2) 00:16:29.228 5.855 - 5.879: 97.9101% ( 3) 00:16:29.228 5.973 - 5.997: 97.9258% ( 2) 00:16:29.228 5.997 - 6.021: 97.9337% ( 1) 00:16:29.228 6.068 - 6.116: 97.9415% ( 1) 00:16:29.228 6.163 - 6.210: 97.9494% ( 1) 00:16:29.228 6.210 - 6.258: 97.9573% ( 1) 00:16:29.228 6.305 - 6.353: 97.9730% ( 2) 00:16:29.228 6.400 - 6.447: 97.9808% ( 1) 00:16:29.228 6.447 - 6.495: 97.9965% ( 2) 00:16:29.228 6.637 - 6.684: 98.0044% ( 1) 00:16:29.228 6.684 - 6.732: 98.0123% ( 1) 00:16:29.228 6.874 - 6.921: 98.0201% ( 1) 00:16:29.228 6.969 - 7.016: 98.0358% ( 2) 00:16:29.228 7.396 - 7.443: 98.0437% ( 1) 00:16:29.228 7.633 - 7.680: 98.0515% ( 1) 00:16:29.228 7.964 - 8.012: 98.0594% ( 1) 00:16:29.228 8.059 - 8.107: 98.0673% ( 1) 00:16:29.228 8.107 - 8.154: 98.0751% ( 1) 00:16:29.228 8.154 - 8.201: 98.0908% ( 2) 00:16:29.228 8.201 - 8.249: 98.0987% ( 1) 00:16:29.228 8.344 - 8.391: 98.1065% ( 1) 00:16:29.228 8.391 - 8.439: 98.1223% ( 2) 00:16:29.228 8.439 - 8.486: 98.1301% ( 1) 00:16:29.228 8.581 - 8.628: 98.1380% ( 1) 00:16:29.228 8.628 - 8.676: 98.1458% ( 1) 00:16:29.228 8.676 - 8.723: 98.1615% ( 2) 00:16:29.228 8.723 - 8.770: 98.1772% ( 2) 00:16:29.228 8.865 - 8.913: 98.2008% ( 3) 00:16:29.228 9.007 - 9.055: 98.2244% ( 3) 00:16:29.228 9.055 - 9.102: 98.2480% ( 3) 00:16:29.228 9.150 - 9.197: 98.2558% ( 1) 00:16:29.228 9.197 - 9.244: 98.2637% ( 1) 00:16:29.228 9.244 - 
9.292: 98.2715% ( 1) 00:16:29.228 9.292 - 9.339: 98.2794% ( 1) 00:16:29.228 9.387 - 9.434: 98.2951% ( 2) 00:16:29.228 9.481 - 9.529: 98.3108% ( 2) 00:16:29.228 9.576 - 9.624: 98.3187% ( 1) 00:16:29.228 9.671 - 9.719: 98.3265% ( 1) 00:16:29.228 9.766 - 9.813: 98.3344% ( 1) 00:16:29.228 9.861 - 9.908: 98.3422% ( 1) 00:16:29.228 9.956 - 10.003: 98.3580% ( 2) 00:16:29.228 10.003 - 10.050: 98.3658% ( 1) 00:16:29.228 10.098 - 10.145: 98.3737% ( 1) 00:16:29.228 10.240 - 10.287: 98.3815% ( 1) 00:16:29.228 10.335 - 10.382: 98.3894% ( 1) 00:16:29.228 10.477 - 10.524: 98.3972% ( 1) 00:16:29.228 10.572 - 10.619: 98.4051% ( 1) 00:16:29.228 10.619 - 10.667: 98.4129% ( 1) 00:16:29.228 10.761 - 10.809: 98.4208% ( 1) 00:16:29.228 10.809 - 10.856: 98.4365% ( 2) 00:16:29.228 10.904 - 10.951: 98.4444% ( 1) 00:16:29.228 10.951 - 10.999: 98.4601% ( 2) 00:16:29.228 10.999 - 11.046: 98.4679% ( 1) 00:16:29.228 11.093 - 11.141: 98.4837% ( 2) 00:16:29.228 11.188 - 11.236: 98.4994% ( 2) 00:16:29.228 11.236 - 11.283: 98.5151% ( 2) 00:16:29.228 11.283 - 11.330: 98.5229% ( 1) 00:16:29.228 11.425 - 11.473: 98.5308% ( 1) 00:16:29.228 11.520 - 11.567: 98.5387% ( 1) 00:16:29.228 11.615 - 11.662: 98.5465% ( 1) 00:16:29.228 11.662 - 11.710: 98.5544% ( 1) 00:16:29.228 11.757 - 11.804: 98.5701% ( 2) 00:16:29.228 11.804 - 11.852: 98.5858% ( 2) 00:16:29.228 12.231 - 12.326: 98.5937% ( 1) 00:16:29.228 12.421 - 12.516: 98.6015% ( 1) 00:16:29.228 12.516 - 12.610: 98.6094% ( 1) 00:16:29.228 12.610 - 12.705: 98.6251% ( 2) 00:16:29.228 12.705 - 12.800: 98.6329% ( 1) 00:16:29.228 13.084 - 13.179: 98.6408% ( 1) 00:16:29.228 13.369 - 13.464: 98.6565% ( 2) 00:16:29.228 13.464 - 13.559: 98.6644% ( 1) 00:16:29.228 13.559 - 13.653: 98.6879% ( 3) 00:16:29.228 13.748 - 13.843: 98.6958% ( 1) 00:16:29.228 13.843 - 13.938: 98.7194% ( 3) 00:16:29.228 13.938 - 14.033: 98.7272% ( 1) 00:16:29.228 14.033 - 14.127: 98.7351% ( 1) 00:16:29.229 14.127 - 14.222: 98.7586% ( 3) 00:16:29.229 14.222 - 14.317: 98.7744% ( 2) 00:16:29.229 
14.317 - 14.412: 98.7822% ( 1) 00:16:29.229 14.412 - 14.507: 98.7901% ( 1) 00:16:29.229 14.886 - 14.981: 98.7979% ( 1) 00:16:29.229 14.981 - 15.076: 98.8058% ( 1) 00:16:29.229 15.265 - 15.360: 98.8136% ( 1) 00:16:29.229 15.834 - 15.929: 98.8215% ( 1) 00:16:29.229 17.067 - 17.161: 98.8294% ( 1) 00:16:29.229 17.161 - 17.256: 98.8451% ( 2) 00:16:29.229 17.256 - 17.351: 98.8608% ( 2) 00:16:29.229 17.351 - 17.446: 98.8922% ( 4) 00:16:29.229 17.446 - 17.541: 98.9472% ( 7) 00:16:29.229 17.541 - 17.636: 98.9943% ( 6) 00:16:29.229 17.636 - 17.730: 99.0258% ( 4) 00:16:29.229 17.730 - 17.825: 99.0808% ( 7) 00:16:29.229 17.825 - 17.920: 99.1043% ( 3) 00:16:29.229 17.920 - 18.015: 99.2222% ( 15) 00:16:29.229 18.015 - 18.110: 99.2615% ( 5) 00:16:29.229 18.110 - 18.204: 99.3165% ( 7) 00:16:29.229 18.204 - 18.299: 99.3793% ( 8) 00:16:29.229 18.299 - 18.394: 99.4029% ( 3) 00:16:29.229 18.394 - 18.489: 99.5207% ( 15) 00:16:29.229 18.489 - 18.584: 99.5679% ( 6) 00:16:29.229 18.584 - 18.679: 99.6386% ( 9) 00:16:29.229 18.679 - 18.773: 99.6543% ( 2) 00:16:29.229 18.773 - 18.868: 99.7093% ( 7) 00:16:29.229 18.868 - 18.963: 99.7486% ( 5) 00:16:29.229 18.963 - 19.058: 99.7722% ( 3) 00:16:29.229 19.058 - 19.153: 99.7800% ( 1) 00:16:29.229 19.153 - 19.247: 99.7879% ( 1) 00:16:29.229 19.247 - 19.342: 99.7957% ( 1) 00:16:29.229 19.342 - 19.437: 99.8114% ( 2) 00:16:29.229 19.627 - 19.721: 99.8193% ( 1) 00:16:29.229 19.721 - 19.816: 99.8272% ( 1) 00:16:29.229 19.911 - 20.006: 99.8350% ( 1) 00:16:29.229 20.006 - 20.101: 99.8429% ( 1) 00:16:29.229 20.196 - 20.290: 99.8507% ( 1) 00:16:29.229 20.954 - 21.049: 99.8586% ( 1) 00:16:29.229 23.230 - 23.324: 99.8664% ( 1) 00:16:29.229 24.841 - 25.031: 99.8743% ( 1) 00:16:29.229 25.600 - 25.790: 99.8821% ( 1) 00:16:29.229 26.738 - 26.927: 99.8900% ( 1) 00:16:29.229 27.686 - 27.876: 99.8979% ( 1) 00:16:29.229 29.013 - 29.203: 99.9057% ( 1) 00:16:29.229 32.806 - 32.996: 99.9136% ( 1) 00:16:29.229 3980.705 - 4004.978: 99.9686% ( 7) 00:16:29.229 4004.978 - 
4029.250: 100.0000% ( 4) 00:16:29.229 00:16:29.229 Complete histogram 00:16:29.229 ================== 00:16:29.229 Range in us Cumulative Count 00:16:29.229 2.062 - 2.074: 2.9463% ( 375) 00:16:29.229 2.074 - 2.086: 33.5324% ( 3893) 00:16:29.229 2.086 - 2.098: 40.5798% ( 897) 00:16:29.229 2.098 - 2.110: 42.8661% ( 291) 00:16:29.229 2.110 - 2.121: 48.3972% ( 704) 00:16:29.229 2.121 - 2.133: 50.5421% ( 273) 00:16:29.229 2.133 - 2.145: 57.0475% ( 828) 00:16:29.229 2.145 - 2.157: 69.3589% ( 1567) 00:16:29.229 2.157 - 2.169: 70.7888% ( 182) 00:16:29.229 2.169 - 2.181: 72.9180% ( 271) 00:16:29.229 2.181 - 2.193: 75.7700% ( 363) 00:16:29.229 2.193 - 2.204: 76.8777% ( 141) 00:16:29.229 2.204 - 2.216: 79.3997% ( 321) 00:16:29.229 2.216 - 2.228: 85.9601% ( 835) 00:16:29.229 2.228 - 2.240: 88.5999% ( 336) 00:16:29.229 2.240 - 2.252: 90.2577% ( 211) 00:16:29.229 2.252 - 2.264: 91.7348% ( 188) 00:16:29.229 2.264 - 2.276: 92.3397% ( 77) 00:16:29.229 2.276 - 2.287: 92.9290% ( 75) 00:16:29.229 2.287 - 2.299: 93.4789% ( 70) 00:16:29.229 2.299 - 2.311: 94.5553% ( 137) 00:16:29.229 2.311 - 2.323: 95.1838% ( 80) 00:16:29.229 2.323 - 2.335: 95.2703% ( 11) 00:16:29.229 2.335 - 2.347: 95.3096% ( 5) 00:16:29.229 2.347 - 2.359: 95.3646% ( 7) 00:16:29.229 2.359 - 2.370: 95.5217% ( 20) 00:16:29.229 2.370 - 2.382: 95.8202% ( 38) 00:16:29.229 2.382 - 2.394: 96.2131% ( 50) 00:16:29.229 2.394 - 2.406: 96.5666% ( 45) 00:16:29.229 2.406 - 2.418: 96.7709% ( 26) 00:16:29.229 2.418 - 2.430: 96.9909% ( 28) 00:16:29.229 2.430 - 2.441: 97.2187% ( 29) 00:16:29.229 2.441 - 2.453: 97.3287% ( 14) 00:16:29.229 2.453 - 2.465: 97.4859% ( 20) 00:16:29.229 2.465 - 2.477: 97.6980% ( 27) 00:16:29.229 2.477 - 2.489: 97.8158% ( 15) 00:16:29.229 2.489 - 2.501: 97.9573% ( 18) 00:16:29.229 2.501 - 2.513: 98.0751% ( 15) 00:16:29.229 2.513 - 2.524: 98.1223% ( 6) 00:16:29.229 2.524 - 2.536: 98.2008% ( 10) 00:16:29.229 2.536 - 2.548: 98.2558% ( 7) 00:16:29.229 2.548 - 2.560: 98.3265% ( 9) 00:16:29.229 2.560 - 2.572: 
98.3658% ( 5) 00:16:29.229 2.572 - 2.584: 98.4051% ( 5) 00:16:29.229 2.584 - 2.596: 98.4287% ( 3) [2024-10-07 13:27:10.568423] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:29.229 00:16:29.229 2.596 - 2.607: 98.4444% ( 2) 00:16:29.229 2.619 - 2.631: 98.4601% ( 2) 00:16:29.229 2.643 - 2.655: 98.4758% ( 2) 00:16:29.229 2.667 - 2.679: 98.4837% ( 1) 00:16:29.229 2.690 - 2.702: 98.4994% ( 2) 00:16:29.229 2.714 - 2.726: 98.5072% ( 1) 00:16:29.229 2.726 - 2.738: 98.5151% ( 1) 00:16:29.229 2.738 - 2.750: 98.5229% ( 1) 00:16:29.229 2.750 - 2.761: 98.5308% ( 1) 00:16:29.229 2.773 - 2.785: 98.5544% ( 3) 00:16:29.229 2.844 - 2.856: 98.5701% ( 2) 00:16:29.229 2.880 - 2.892: 98.5779% ( 1) 00:16:29.229 2.939 - 2.951: 98.5858% ( 1) 00:16:29.229 3.153 - 3.176: 98.5937% ( 1) 00:16:29.229 3.342 - 3.366: 98.6015% ( 1) 00:16:29.229 3.484 - 3.508: 98.6094% ( 1) 00:16:29.229 3.508 - 3.532: 98.6251% ( 2) 00:16:29.229 3.532 - 3.556: 98.6408% ( 2) 00:16:29.229 3.579 - 3.603: 98.6565% ( 2) 00:16:29.229 3.627 - 3.650: 98.6644% ( 1) 00:16:29.229 3.674 - 3.698: 98.6801% ( 2) 00:16:29.229 3.769 - 3.793: 98.6879% ( 1) 00:16:29.229 3.793 - 3.816: 98.6958% ( 1) 00:16:29.229 3.816 - 3.840: 98.7036% ( 1) 00:16:29.229 3.840 - 3.864: 98.7272% ( 3) 00:16:29.229 3.935 - 3.959: 98.7351% ( 1) 00:16:29.229 4.006 - 4.030: 98.7429% ( 1) 00:16:29.229 4.053 - 4.077: 98.7508% ( 1) 00:16:29.229 4.077 - 4.101: 98.7586% ( 1) 00:16:29.229 4.504 - 4.527: 98.7665% ( 1) 00:16:29.229 5.760 - 5.784: 98.7744% ( 1) 00:16:29.229 6.447 - 6.495: 98.7822% ( 1) 00:16:29.229 6.590 - 6.637: 98.7901% ( 1) 00:16:29.229 6.637 - 6.684: 98.7979% ( 1) 00:16:29.229 7.538 - 7.585: 98.8058% ( 1) 00:16:29.229 8.059 - 8.107: 98.8136% ( 1) 00:16:29.229 8.154 - 8.201: 98.8215% ( 1) 00:16:29.229 8.581 - 8.628: 98.8294% ( 1) 00:16:29.229 10.193 - 10.240: 98.8372% ( 1) 00:16:29.229 10.999 - 11.046: 98.8451% ( 1) 00:16:29.229 15.455 - 15.550: 98.8529% ( 1) 00:16:29.229 15.550 -
15.644: 98.8765% ( 3) 00:16:29.229 15.644 - 15.739: 98.9001% ( 3) 00:16:29.229 15.739 - 15.834: 98.9158% ( 2) 00:16:29.229 15.929 - 16.024: 98.9315% ( 2) 00:16:29.229 16.024 - 16.119: 98.9393% ( 1) 00:16:29.229 16.119 - 16.213: 98.9551% ( 2) 00:16:29.229 16.213 - 16.308: 98.9943% ( 5) 00:16:29.229 16.308 - 16.403: 99.0258% ( 4) 00:16:29.229 16.403 - 16.498: 99.0415% ( 2) 00:16:29.229 16.498 - 16.593: 99.0729% ( 4) 00:16:29.229 16.593 - 16.687: 99.1043% ( 4) 00:16:29.229 16.687 - 16.782: 99.1829% ( 10) 00:16:29.229 16.782 - 16.877: 99.2300% ( 6) 00:16:29.229 16.877 - 16.972: 99.2536% ( 3) 00:16:29.229 17.161 - 17.256: 99.2615% ( 1) 00:16:29.229 17.351 - 17.446: 99.2850% ( 3) 00:16:29.229 17.446 - 17.541: 99.2929% ( 1) 00:16:29.229 18.584 - 18.679: 99.3008% ( 1) 00:16:29.229 3021.938 - 3034.074: 99.3086% ( 1) 00:16:29.229 3034.074 - 3046.210: 99.3165% ( 1) 00:16:29.229 3155.437 - 3179.710: 99.3243% ( 1) 00:16:29.229 3980.705 - 4004.978: 99.7643% ( 56) 00:16:29.229 4004.978 - 4029.250: 99.9843% ( 28) 00:16:29.229 5000.154 - 5024.427: 100.0000% ( 2) 00:16:29.229 00:16:29.229 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:29.229 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:29.229 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:29.229 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:29.229 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:29.229 [ 00:16:29.229 { 00:16:29.229 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:29.229 "subtype": "Discovery", 00:16:29.229 
"listen_addresses": [], 00:16:29.229 "allow_any_host": true, 00:16:29.229 "hosts": [] 00:16:29.229 }, 00:16:29.229 { 00:16:29.229 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:29.229 "subtype": "NVMe", 00:16:29.229 "listen_addresses": [ 00:16:29.229 { 00:16:29.229 "trtype": "VFIOUSER", 00:16:29.229 "adrfam": "IPv4", 00:16:29.229 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:29.229 "trsvcid": "0" 00:16:29.229 } 00:16:29.229 ], 00:16:29.229 "allow_any_host": true, 00:16:29.229 "hosts": [], 00:16:29.229 "serial_number": "SPDK1", 00:16:29.229 "model_number": "SPDK bdev Controller", 00:16:29.229 "max_namespaces": 32, 00:16:29.229 "min_cntlid": 1, 00:16:29.229 "max_cntlid": 65519, 00:16:29.229 "namespaces": [ 00:16:29.229 { 00:16:29.229 "nsid": 1, 00:16:29.229 "bdev_name": "Malloc1", 00:16:29.229 "name": "Malloc1", 00:16:29.229 "nguid": "3ED519B02E834CBBB841F19AA3A57A87", 00:16:29.229 "uuid": "3ed519b0-2e83-4cbb-b841-f19aa3a57a87" 00:16:29.229 } 00:16:29.229 ] 00:16:29.230 }, 00:16:29.230 { 00:16:29.230 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:29.230 "subtype": "NVMe", 00:16:29.230 "listen_addresses": [ 00:16:29.230 { 00:16:29.230 "trtype": "VFIOUSER", 00:16:29.230 "adrfam": "IPv4", 00:16:29.230 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:29.230 "trsvcid": "0" 00:16:29.230 } 00:16:29.230 ], 00:16:29.230 "allow_any_host": true, 00:16:29.230 "hosts": [], 00:16:29.230 "serial_number": "SPDK2", 00:16:29.230 "model_number": "SPDK bdev Controller", 00:16:29.230 "max_namespaces": 32, 00:16:29.230 "min_cntlid": 1, 00:16:29.230 "max_cntlid": 65519, 00:16:29.230 "namespaces": [ 00:16:29.230 { 00:16:29.230 "nsid": 1, 00:16:29.230 "bdev_name": "Malloc2", 00:16:29.230 "name": "Malloc2", 00:16:29.230 "nguid": "A643E70A6FDD482AABF9A401A2FC2BFF", 00:16:29.230 "uuid": "a643e70a-6fdd-482a-abf9-a401a2fc2bff" 00:16:29.230 } 00:16:29.230 ] 00:16:29.230 } 00:16:29.230 ] 00:16:29.230 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # 
AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:29.230 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1783968 00:16:29.230 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:29.230 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:29.230 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:29.230 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:29.230 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:29.230 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:29.230 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:29.230 13:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:29.487 [2024-10-07 13:27:11.085983] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:29.745 Malloc3 00:16:29.745 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:30.002 [2024-10-07 13:27:11.494957] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:30.003 13:27:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:30.003 Asynchronous Event Request test 00:16:30.003 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:30.003 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:30.003 Registering asynchronous event callbacks... 00:16:30.003 Starting namespace attribute notice tests for all controllers... 00:16:30.003 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:30.003 aer_cb - Changed Namespace 00:16:30.003 Cleaning up... 00:16:30.261 [ 00:16:30.261 { 00:16:30.261 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:30.261 "subtype": "Discovery", 00:16:30.261 "listen_addresses": [], 00:16:30.261 "allow_any_host": true, 00:16:30.261 "hosts": [] 00:16:30.261 }, 00:16:30.261 { 00:16:30.261 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:30.261 "subtype": "NVMe", 00:16:30.261 "listen_addresses": [ 00:16:30.261 { 00:16:30.261 "trtype": "VFIOUSER", 00:16:30.261 "adrfam": "IPv4", 00:16:30.261 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:30.261 "trsvcid": "0" 00:16:30.261 } 00:16:30.261 ], 00:16:30.261 "allow_any_host": true, 00:16:30.261 "hosts": [], 00:16:30.261 "serial_number": "SPDK1", 00:16:30.261 "model_number": "SPDK bdev Controller", 00:16:30.261 "max_namespaces": 32, 00:16:30.261 "min_cntlid": 1, 00:16:30.261 "max_cntlid": 65519, 00:16:30.261 "namespaces": [ 00:16:30.261 { 00:16:30.261 "nsid": 1, 00:16:30.261 "bdev_name": "Malloc1", 00:16:30.261 "name": "Malloc1", 00:16:30.261 "nguid": "3ED519B02E834CBBB841F19AA3A57A87", 00:16:30.261 "uuid": "3ed519b0-2e83-4cbb-b841-f19aa3a57a87" 00:16:30.261 }, 00:16:30.261 { 00:16:30.261 "nsid": 2, 00:16:30.261 "bdev_name": "Malloc3", 00:16:30.261 "name": "Malloc3", 00:16:30.261 "nguid": "E231E5CF4271470FB4F4403B03755AC1", 00:16:30.261 "uuid": "e231e5cf-4271-470f-b4f4-403b03755ac1" 
00:16:30.261 } 00:16:30.261 ] 00:16:30.261 }, 00:16:30.261 { 00:16:30.261 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:30.261 "subtype": "NVMe", 00:16:30.261 "listen_addresses": [ 00:16:30.261 { 00:16:30.261 "trtype": "VFIOUSER", 00:16:30.261 "adrfam": "IPv4", 00:16:30.261 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:30.261 "trsvcid": "0" 00:16:30.261 } 00:16:30.261 ], 00:16:30.261 "allow_any_host": true, 00:16:30.261 "hosts": [], 00:16:30.261 "serial_number": "SPDK2", 00:16:30.261 "model_number": "SPDK bdev Controller", 00:16:30.261 "max_namespaces": 32, 00:16:30.261 "min_cntlid": 1, 00:16:30.261 "max_cntlid": 65519, 00:16:30.261 "namespaces": [ 00:16:30.261 { 00:16:30.261 "nsid": 1, 00:16:30.261 "bdev_name": "Malloc2", 00:16:30.261 "name": "Malloc2", 00:16:30.261 "nguid": "A643E70A6FDD482AABF9A401A2FC2BFF", 00:16:30.261 "uuid": "a643e70a-6fdd-482a-abf9-a401a2fc2bff" 00:16:30.261 } 00:16:30.261 ] 00:16:30.261 } 00:16:30.261 ] 00:16:30.261 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1783968 00:16:30.261 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:30.261 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:30.261 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:30.262 13:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:30.262 [2024-10-07 13:27:11.798528] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:16:30.262 [2024-10-07 13:27:11.798571] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1784096 ] 00:16:30.262 [2024-10-07 13:27:11.832959] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:30.262 [2024-10-07 13:27:11.842708] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:30.262 [2024-10-07 13:27:11.842743] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f35f1093000 00:16:30.262 [2024-10-07 13:27:11.843695] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:30.262 [2024-10-07 13:27:11.844707] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:30.262 [2024-10-07 13:27:11.845710] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:30.262 [2024-10-07 13:27:11.846712] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:30.262 [2024-10-07 13:27:11.847739] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:30.262 [2024-10-07 13:27:11.848732] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:30.262 [2024-10-07 13:27:11.849739] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:30.262 
[2024-10-07 13:27:11.850744] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:30.262 [2024-10-07 13:27:11.851758] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:30.262 [2024-10-07 13:27:11.851781] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f35f1088000 00:16:30.262 [2024-10-07 13:27:11.852923] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:30.262 [2024-10-07 13:27:11.868379] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:30.262 [2024-10-07 13:27:11.868424] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:16:30.262 [2024-10-07 13:27:11.873533] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:30.262 [2024-10-07 13:27:11.873592] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:30.262 [2024-10-07 13:27:11.873702] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:16:30.262 [2024-10-07 13:27:11.873730] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:16:30.262 [2024-10-07 13:27:11.873741] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:16:30.262 [2024-10-07 13:27:11.874536] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:30.262 [2024-10-07 13:27:11.874556] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:16:30.262 [2024-10-07 13:27:11.874569] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:16:30.262 [2024-10-07 13:27:11.875542] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:30.262 [2024-10-07 13:27:11.875562] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:16:30.262 [2024-10-07 13:27:11.875576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:16:30.262 [2024-10-07 13:27:11.876555] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:30.262 [2024-10-07 13:27:11.876576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:30.262 [2024-10-07 13:27:11.877561] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:30.262 [2024-10-07 13:27:11.877581] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:16:30.262 [2024-10-07 13:27:11.877591] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:16:30.262 [2024-10-07 13:27:11.877602] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:30.262 [2024-10-07 13:27:11.877712] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:16:30.262 [2024-10-07 13:27:11.877723] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:30.262 [2024-10-07 13:27:11.877731] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:30.262 [2024-10-07 13:27:11.878569] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:30.262 [2024-10-07 13:27:11.879572] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:30.262 [2024-10-07 13:27:11.880579] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:30.262 [2024-10-07 13:27:11.881577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:30.262 [2024-10-07 13:27:11.881656] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:30.262 [2024-10-07 13:27:11.882589] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:30.262 [2024-10-07 13:27:11.882612] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:30.262 [2024-10-07 13:27:11.882623] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:16:30.262 [2024-10-07 13:27:11.882646] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:16:30.262 [2024-10-07 13:27:11.882681] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:16:30.262 [2024-10-07 13:27:11.882705] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:30.262 [2024-10-07 13:27:11.882716] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:30.262 [2024-10-07 13:27:11.882722] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:30.262 [2024-10-07 13:27:11.882741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:30.262 [2024-10-07 13:27:11.890682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:30.262 [2024-10-07 13:27:11.890706] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:16:30.262 [2024-10-07 13:27:11.890715] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:16:30.262 [2024-10-07 13:27:11.890723] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:16:30.262 [2024-10-07 13:27:11.890730] nvme_ctrlr.c:2095:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:30.262 [2024-10-07 13:27:11.890739] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:16:30.262 [2024-10-07 13:27:11.890746] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:16:30.262 [2024-10-07 13:27:11.890755] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:16:30.279 [2024-10-07 13:27:11.890767] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:16:30.279 [2024-10-07 13:27:11.890782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:30.279 [2024-10-07 13:27:11.898680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:30.279 [2024-10-07 13:27:11.898705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.279 [2024-10-07 13:27:11.898719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.279 [2024-10-07 13:27:11.898732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.279 [2024-10-07 13:27:11.898744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:30.279 [2024-10-07 13:27:11.898754] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:16:30.279 [2024-10-07 13:27:11.898771] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait 
for set keep alive timeout (timeout 30000 ms) 00:16:30.279 [2024-10-07 13:27:11.898790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:30.279 [2024-10-07 13:27:11.906691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:30.279 [2024-10-07 13:27:11.906709] nvme_ctrlr.c:3034:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:16:30.279 [2024-10-07 13:27:11.906718] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:30.279 [2024-10-07 13:27:11.906729] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:16:30.279 [2024-10-07 13:27:11.906743] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:16:30.279 [2024-10-07 13:27:11.906759] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:30.279 [2024-10-07 13:27:11.914677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:30.279 [2024-10-07 13:27:11.914751] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:16:30.279 [2024-10-07 13:27:11.914768] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:16:30.279 [2024-10-07 13:27:11.914782] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: 
prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:30.279 [2024-10-07 13:27:11.914790] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:30.279 [2024-10-07 13:27:11.914796] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:30.279 [2024-10-07 13:27:11.914807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:30.280 [2024-10-07 13:27:11.922676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:30.280 [2024-10-07 13:27:11.922698] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:16:30.280 [2024-10-07 13:27:11.922719] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:16:30.280 [2024-10-07 13:27:11.922734] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:16:30.280 [2024-10-07 13:27:11.922748] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:30.280 [2024-10-07 13:27:11.922756] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:30.280 [2024-10-07 13:27:11.922763] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:30.280 [2024-10-07 13:27:11.922772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:30.280 [2024-10-07 13:27:11.930677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:30.280 [2024-10-07 13:27:11.930705] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:30.280 [2024-10-07 13:27:11.930722] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:30.280 [2024-10-07 13:27:11.930736] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:30.280 [2024-10-07 13:27:11.930748] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:30.280 [2024-10-07 13:27:11.930755] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:30.280 [2024-10-07 13:27:11.930765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:30.280 [2024-10-07 13:27:11.938677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:30.280 [2024-10-07 13:27:11.938699] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:30.280 [2024-10-07 13:27:11.938711] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:16:30.280 [2024-10-07 13:27:11.938729] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:16:30.280 [2024-10-07 13:27:11.938739] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:16:30.280 [2024-10-07 13:27:11.938747] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:30.280 [2024-10-07 13:27:11.938756] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:16:30.280 [2024-10-07 13:27:11.938764] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:16:30.280 [2024-10-07 13:27:11.938771] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:16:30.280 [2024-10-07 13:27:11.938779] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:16:30.280 [2024-10-07 13:27:11.938803] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:30.280 [2024-10-07 13:27:11.946681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:30.280 [2024-10-07 13:27:11.946709] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:30.280 [2024-10-07 13:27:11.954679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:30.280 [2024-10-07 13:27:11.954704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:30.280 [2024-10-07 13:27:11.962681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:30.280 [2024-10-07 13:27:11.962707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF 
QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:30.280 [2024-10-07 13:27:11.970678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:30.280 [2024-10-07 13:27:11.970711] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:30.280 [2024-10-07 13:27:11.970723] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:30.280 [2024-10-07 13:27:11.970730] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:30.280 [2024-10-07 13:27:11.970736] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:30.280 [2024-10-07 13:27:11.970742] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:30.280 [2024-10-07 13:27:11.970752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:30.280 [2024-10-07 13:27:11.970769] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:30.280 [2024-10-07 13:27:11.970778] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:30.280 [2024-10-07 13:27:11.970784] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:30.280 [2024-10-07 13:27:11.970794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:30.280 [2024-10-07 13:27:11.970806] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:30.280 [2024-10-07 13:27:11.970814] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:30.280 
[2024-10-07 13:27:11.970821] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:30.280 [2024-10-07 13:27:11.970830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:30.280 [2024-10-07 13:27:11.970842] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:30.280 [2024-10-07 13:27:11.970851] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:30.280 [2024-10-07 13:27:11.970857] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:30.280 [2024-10-07 13:27:11.970866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:30.539 [2024-10-07 13:27:11.978698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:30.539 [2024-10-07 13:27:11.978726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:30.539 [2024-10-07 13:27:11.978761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:30.539 [2024-10-07 13:27:11.978775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:30.539 ===================================================== 00:16:30.539 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:30.539 ===================================================== 00:16:30.539 Controller Capabilities/Features 00:16:30.539 ================================ 00:16:30.539 Vendor ID: 4e58 00:16:30.539 Subsystem Vendor ID: 4e58 
00:16:30.539 Serial Number: SPDK2 00:16:30.539 Model Number: SPDK bdev Controller 00:16:30.539 Firmware Version: 25.01 00:16:30.539 Recommended Arb Burst: 6 00:16:30.539 IEEE OUI Identifier: 8d 6b 50 00:16:30.539 Multi-path I/O 00:16:30.539 May have multiple subsystem ports: Yes 00:16:30.539 May have multiple controllers: Yes 00:16:30.539 Associated with SR-IOV VF: No 00:16:30.539 Max Data Transfer Size: 131072 00:16:30.539 Max Number of Namespaces: 32 00:16:30.539 Max Number of I/O Queues: 127 00:16:30.539 NVMe Specification Version (VS): 1.3 00:16:30.539 NVMe Specification Version (Identify): 1.3 00:16:30.539 Maximum Queue Entries: 256 00:16:30.539 Contiguous Queues Required: Yes 00:16:30.539 Arbitration Mechanisms Supported 00:16:30.539 Weighted Round Robin: Not Supported 00:16:30.539 Vendor Specific: Not Supported 00:16:30.539 Reset Timeout: 15000 ms 00:16:30.539 Doorbell Stride: 4 bytes 00:16:30.539 NVM Subsystem Reset: Not Supported 00:16:30.539 Command Sets Supported 00:16:30.539 NVM Command Set: Supported 00:16:30.539 Boot Partition: Not Supported 00:16:30.539 Memory Page Size Minimum: 4096 bytes 00:16:30.539 Memory Page Size Maximum: 4096 bytes 00:16:30.539 Persistent Memory Region: Not Supported 00:16:30.539 Optional Asynchronous Events Supported 00:16:30.539 Namespace Attribute Notices: Supported 00:16:30.539 Firmware Activation Notices: Not Supported 00:16:30.539 ANA Change Notices: Not Supported 00:16:30.539 PLE Aggregate Log Change Notices: Not Supported 00:16:30.539 LBA Status Info Alert Notices: Not Supported 00:16:30.539 EGE Aggregate Log Change Notices: Not Supported 00:16:30.539 Normal NVM Subsystem Shutdown event: Not Supported 00:16:30.539 Zone Descriptor Change Notices: Not Supported 00:16:30.539 Discovery Log Change Notices: Not Supported 00:16:30.539 Controller Attributes 00:16:30.539 128-bit Host Identifier: Supported 00:16:30.539 Non-Operational Permissive Mode: Not Supported 00:16:30.539 NVM Sets: Not Supported 00:16:30.539 Read Recovery 
Levels: Not Supported 00:16:30.539 Endurance Groups: Not Supported 00:16:30.539 Predictable Latency Mode: Not Supported 00:16:30.539 Traffic Based Keep ALive: Not Supported 00:16:30.539 Namespace Granularity: Not Supported 00:16:30.539 SQ Associations: Not Supported 00:16:30.539 UUID List: Not Supported 00:16:30.539 Multi-Domain Subsystem: Not Supported 00:16:30.539 Fixed Capacity Management: Not Supported 00:16:30.539 Variable Capacity Management: Not Supported 00:16:30.539 Delete Endurance Group: Not Supported 00:16:30.539 Delete NVM Set: Not Supported 00:16:30.539 Extended LBA Formats Supported: Not Supported 00:16:30.539 Flexible Data Placement Supported: Not Supported 00:16:30.539 00:16:30.539 Controller Memory Buffer Support 00:16:30.539 ================================ 00:16:30.539 Supported: No 00:16:30.539 00:16:30.539 Persistent Memory Region Support 00:16:30.539 ================================ 00:16:30.539 Supported: No 00:16:30.539 00:16:30.539 Admin Command Set Attributes 00:16:30.539 ============================ 00:16:30.539 Security Send/Receive: Not Supported 00:16:30.539 Format NVM: Not Supported 00:16:30.539 Firmware Activate/Download: Not Supported 00:16:30.539 Namespace Management: Not Supported 00:16:30.539 Device Self-Test: Not Supported 00:16:30.539 Directives: Not Supported 00:16:30.539 NVMe-MI: Not Supported 00:16:30.539 Virtualization Management: Not Supported 00:16:30.539 Doorbell Buffer Config: Not Supported 00:16:30.539 Get LBA Status Capability: Not Supported 00:16:30.539 Command & Feature Lockdown Capability: Not Supported 00:16:30.539 Abort Command Limit: 4 00:16:30.539 Async Event Request Limit: 4 00:16:30.539 Number of Firmware Slots: N/A 00:16:30.539 Firmware Slot 1 Read-Only: N/A 00:16:30.539 Firmware Activation Without Reset: N/A 00:16:30.539 Multiple Update Detection Support: N/A 00:16:30.539 Firmware Update Granularity: No Information Provided 00:16:30.539 Per-Namespace SMART Log: No 00:16:30.539 Asymmetric Namespace Access 
Log Page: Not Supported 00:16:30.539 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:30.539 Command Effects Log Page: Supported 00:16:30.539 Get Log Page Extended Data: Supported 00:16:30.539 Telemetry Log Pages: Not Supported 00:16:30.539 Persistent Event Log Pages: Not Supported 00:16:30.539 Supported Log Pages Log Page: May Support 00:16:30.539 Commands Supported & Effects Log Page: Not Supported 00:16:30.539 Feature Identifiers & Effects Log Page:May Support 00:16:30.539 NVMe-MI Commands & Effects Log Page: May Support 00:16:30.539 Data Area 4 for Telemetry Log: Not Supported 00:16:30.539 Error Log Page Entries Supported: 128 00:16:30.540 Keep Alive: Supported 00:16:30.540 Keep Alive Granularity: 10000 ms 00:16:30.540 00:16:30.540 NVM Command Set Attributes 00:16:30.540 ========================== 00:16:30.540 Submission Queue Entry Size 00:16:30.540 Max: 64 00:16:30.540 Min: 64 00:16:30.540 Completion Queue Entry Size 00:16:30.540 Max: 16 00:16:30.540 Min: 16 00:16:30.540 Number of Namespaces: 32 00:16:30.540 Compare Command: Supported 00:16:30.540 Write Uncorrectable Command: Not Supported 00:16:30.540 Dataset Management Command: Supported 00:16:30.540 Write Zeroes Command: Supported 00:16:30.540 Set Features Save Field: Not Supported 00:16:30.540 Reservations: Not Supported 00:16:30.540 Timestamp: Not Supported 00:16:30.540 Copy: Supported 00:16:30.540 Volatile Write Cache: Present 00:16:30.540 Atomic Write Unit (Normal): 1 00:16:30.540 Atomic Write Unit (PFail): 1 00:16:30.540 Atomic Compare & Write Unit: 1 00:16:30.540 Fused Compare & Write: Supported 00:16:30.540 Scatter-Gather List 00:16:30.540 SGL Command Set: Supported (Dword aligned) 00:16:30.540 SGL Keyed: Not Supported 00:16:30.540 SGL Bit Bucket Descriptor: Not Supported 00:16:30.540 SGL Metadata Pointer: Not Supported 00:16:30.540 Oversized SGL: Not Supported 00:16:30.540 SGL Metadata Address: Not Supported 00:16:30.540 SGL Offset: Not Supported 00:16:30.540 Transport SGL Data Block: Not Supported 
00:16:30.540 Replay Protected Memory Block: Not Supported 00:16:30.540 00:16:30.540 Firmware Slot Information 00:16:30.540 ========================= 00:16:30.540 Active slot: 1 00:16:30.540 Slot 1 Firmware Revision: 25.01 00:16:30.540 00:16:30.540 00:16:30.540 Commands Supported and Effects 00:16:30.540 ============================== 00:16:30.540 Admin Commands 00:16:30.540 -------------- 00:16:30.540 Get Log Page (02h): Supported 00:16:30.540 Identify (06h): Supported 00:16:30.540 Abort (08h): Supported 00:16:30.540 Set Features (09h): Supported 00:16:30.540 Get Features (0Ah): Supported 00:16:30.540 Asynchronous Event Request (0Ch): Supported 00:16:30.540 Keep Alive (18h): Supported 00:16:30.540 I/O Commands 00:16:30.540 ------------ 00:16:30.540 Flush (00h): Supported LBA-Change 00:16:30.540 Write (01h): Supported LBA-Change 00:16:30.540 Read (02h): Supported 00:16:30.540 Compare (05h): Supported 00:16:30.540 Write Zeroes (08h): Supported LBA-Change 00:16:30.540 Dataset Management (09h): Supported LBA-Change 00:16:30.540 Copy (19h): Supported LBA-Change 00:16:30.540 00:16:30.540 Error Log 00:16:30.540 ========= 00:16:30.540 00:16:30.540 Arbitration 00:16:30.540 =========== 00:16:30.540 Arbitration Burst: 1 00:16:30.540 00:16:30.540 Power Management 00:16:30.540 ================ 00:16:30.540 Number of Power States: 1 00:16:30.540 Current Power State: Power State #0 00:16:30.540 Power State #0: 00:16:30.540 Max Power: 0.00 W 00:16:30.540 Non-Operational State: Operational 00:16:30.540 Entry Latency: Not Reported 00:16:30.540 Exit Latency: Not Reported 00:16:30.540 Relative Read Throughput: 0 00:16:30.540 Relative Read Latency: 0 00:16:30.540 Relative Write Throughput: 0 00:16:30.540 Relative Write Latency: 0 00:16:30.540 Idle Power: Not Reported 00:16:30.540 Active Power: Not Reported 00:16:30.540 Non-Operational Permissive Mode: Not Supported 00:16:30.540 00:16:30.540 Health Information 00:16:30.540 ================== 00:16:30.540 Critical Warnings: 00:16:30.540 
Available Spare Space: OK 00:16:30.540 Temperature: OK 00:16:30.540 Device Reliability: OK 00:16:30.540 Read Only: No 00:16:30.540 Volatile Memory Backup: OK 00:16:30.540 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:30.540 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:30.540 Available Spare: 0% 00:16:30.540 Available Sp[2024-10-07 13:27:11.978891] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:30.540 [2024-10-07 13:27:11.986680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:30.540 [2024-10-07 13:27:11.986737] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:16:30.540 [2024-10-07 13:27:11.986756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.540 [2024-10-07 13:27:11.986767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.540 [2024-10-07 13:27:11.986777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.540 [2024-10-07 13:27:11.986787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:30.540 [2024-10-07 13:27:11.986851] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:30.540 [2024-10-07 13:27:11.986873] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:30.540 [2024-10-07 13:27:11.987862] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 
00:16:30.540 [2024-10-07 13:27:11.987936] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:16:30.540 [2024-10-07 13:27:11.987967] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:16:30.540 [2024-10-07 13:27:11.988870] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:30.540 [2024-10-07 13:27:11.988896] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:16:30.540 [2024-10-07 13:27:11.988957] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:30.540 [2024-10-07 13:27:11.990155] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:30.540 are Threshold: 0% 00:16:30.540 Life Percentage Used: 0% 00:16:30.540 Data Units Read: 0 00:16:30.540 Data Units Written: 0 00:16:30.540 Host Read Commands: 0 00:16:30.540 Host Write Commands: 0 00:16:30.540 Controller Busy Time: 0 minutes 00:16:30.540 Power Cycles: 0 00:16:30.540 Power On Hours: 0 hours 00:16:30.540 Unsafe Shutdowns: 0 00:16:30.540 Unrecoverable Media Errors: 0 00:16:30.540 Lifetime Error Log Entries: 0 00:16:30.540 Warning Temperature Time: 0 minutes 00:16:30.540 Critical Temperature Time: 0 minutes 00:16:30.540 00:16:30.540 Number of Queues 00:16:30.540 ================ 00:16:30.540 Number of I/O Submission Queues: 127 00:16:30.540 Number of I/O Completion Queues: 127 00:16:30.540 00:16:30.540 Active Namespaces 00:16:30.540 ================= 00:16:30.540 Namespace ID:1 00:16:30.540 Error Recovery Timeout: Unlimited 00:16:30.540 Command Set Identifier: NVM (00h) 00:16:30.540 Deallocate: Supported 00:16:30.540 Deallocated/Unwritten Error: Not Supported 
00:16:30.540 Deallocated Read Value: Unknown 00:16:30.540 Deallocate in Write Zeroes: Not Supported 00:16:30.540 Deallocated Guard Field: 0xFFFF 00:16:30.540 Flush: Supported 00:16:30.540 Reservation: Supported 00:16:30.540 Namespace Sharing Capabilities: Multiple Controllers 00:16:30.540 Size (in LBAs): 131072 (0GiB) 00:16:30.540 Capacity (in LBAs): 131072 (0GiB) 00:16:30.540 Utilization (in LBAs): 131072 (0GiB) 00:16:30.540 NGUID: A643E70A6FDD482AABF9A401A2FC2BFF 00:16:30.540 UUID: a643e70a-6fdd-482a-abf9-a401a2fc2bff 00:16:30.540 Thin Provisioning: Not Supported 00:16:30.540 Per-NS Atomic Units: Yes 00:16:30.540 Atomic Boundary Size (Normal): 0 00:16:30.540 Atomic Boundary Size (PFail): 0 00:16:30.540 Atomic Boundary Offset: 0 00:16:30.540 Maximum Single Source Range Length: 65535 00:16:30.540 Maximum Copy Length: 65535 00:16:30.540 Maximum Source Range Count: 1 00:16:30.540 NGUID/EUI64 Never Reused: No 00:16:30.540 Namespace Write Protected: No 00:16:30.540 Number of LBA Formats: 1 00:16:30.540 Current LBA Format: LBA Format #00 00:16:30.540 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:30.540 00:16:30.540 13:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:30.540 [2024-10-07 13:27:12.218472] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:35.802 Initializing NVMe Controllers 00:16:35.802 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:35.802 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:35.802 Initialization complete. Launching workers. 
00:16:35.802 ======================================================== 00:16:35.802 Latency(us) 00:16:35.802 Device Information : IOPS MiB/s Average min max 00:16:35.802 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33335.23 130.22 3838.98 1166.61 7366.12 00:16:35.802 ======================================================== 00:16:35.802 Total : 33335.23 130.22 3838.98 1166.61 7366.12 00:16:35.802 00:16:35.802 [2024-10-07 13:27:17.330060] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:35.802 13:27:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:36.060 [2024-10-07 13:27:17.573675] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:41.320 Initializing NVMe Controllers 00:16:41.320 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:41.320 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:41.320 Initialization complete. Launching workers. 
00:16:41.320 ======================================================== 00:16:41.320 Latency(us) 00:16:41.320 Device Information : IOPS MiB/s Average min max 00:16:41.320 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30713.18 119.97 4168.62 1203.16 8989.84 00:16:41.320 ======================================================== 00:16:41.320 Total : 30713.18 119.97 4168.62 1203.16 8989.84 00:16:41.320 00:16:41.320 [2024-10-07 13:27:22.598685] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:41.320 13:27:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:41.320 [2024-10-07 13:27:22.797532] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:46.594 [2024-10-07 13:27:27.930818] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:46.594 Initializing NVMe Controllers 00:16:46.594 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:46.594 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:46.594 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:46.594 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:46.594 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:46.594 Initialization complete. Launching workers. 
00:16:46.594 Starting thread on core 2 00:16:46.594 Starting thread on core 3 00:16:46.594 Starting thread on core 1 00:16:46.594 13:27:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:46.594 [2024-10-07 13:27:28.237217] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:49.881 [2024-10-07 13:27:31.301627] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:49.881 Initializing NVMe Controllers 00:16:49.881 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:49.881 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:49.881 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:49.881 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:49.881 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:49.881 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:49.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:49.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:49.881 Initialization complete. Launching workers. 
00:16:49.881 Starting thread on core 1 with urgent priority queue 00:16:49.881 Starting thread on core 2 with urgent priority queue 00:16:49.881 Starting thread on core 3 with urgent priority queue 00:16:49.881 Starting thread on core 0 with urgent priority queue 00:16:49.881 SPDK bdev Controller (SPDK2 ) core 0: 4992.00 IO/s 20.03 secs/100000 ios 00:16:49.881 SPDK bdev Controller (SPDK2 ) core 1: 5063.33 IO/s 19.75 secs/100000 ios 00:16:49.881 SPDK bdev Controller (SPDK2 ) core 2: 5176.33 IO/s 19.32 secs/100000 ios 00:16:49.881 SPDK bdev Controller (SPDK2 ) core 3: 5264.33 IO/s 19.00 secs/100000 ios 00:16:49.881 ======================================================== 00:16:49.881 00:16:49.881 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:50.142 [2024-10-07 13:27:31.608187] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:50.142 Initializing NVMe Controllers 00:16:50.142 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:50.142 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:50.142 Namespace ID: 1 size: 0GB 00:16:50.142 Initialization complete. 00:16:50.142 INFO: using host memory buffer for IO 00:16:50.142 Hello world! 
00:16:50.142 [2024-10-07 13:27:31.617235] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:50.142 13:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:50.402 [2024-10-07 13:27:31.917058] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:51.337 Initializing NVMe Controllers 00:16:51.337 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:51.337 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:51.337 Initialization complete. Launching workers. 00:16:51.337 submit (in ns) avg, min, max = 7411.2, 3514.4, 4021796.7 00:16:51.337 complete (in ns) avg, min, max = 26528.2, 2062.2, 5002818.9 00:16:51.337 00:16:51.337 Submit histogram 00:16:51.337 ================ 00:16:51.337 Range in us Cumulative Count 00:16:51.337 3.508 - 3.532: 0.4131% ( 53) 00:16:51.337 3.532 - 3.556: 1.3329% ( 118) 00:16:51.337 3.556 - 3.579: 4.1546% ( 362) 00:16:51.337 3.579 - 3.603: 7.8572% ( 475) 00:16:51.337 3.603 - 3.627: 15.0674% ( 925) 00:16:51.337 3.627 - 3.650: 22.0204% ( 892) 00:16:51.337 3.650 - 3.674: 30.0023% ( 1024) 00:16:51.337 3.674 - 3.698: 36.9943% ( 897) 00:16:51.337 3.698 - 3.721: 44.5553% ( 970) 00:16:51.337 3.721 - 3.745: 51.0406% ( 832) 00:16:51.337 3.745 - 3.769: 55.6240% ( 588) 00:16:51.337 3.769 - 3.793: 60.0826% ( 572) 00:16:51.337 3.793 - 3.816: 63.7150% ( 466) 00:16:51.337 3.816 - 3.840: 67.3708% ( 469) 00:16:51.337 3.840 - 3.864: 71.6658% ( 551) 00:16:51.337 3.864 - 3.887: 75.2592% ( 461) 00:16:51.337 3.887 - 3.911: 79.0007% ( 480) 00:16:51.337 3.911 - 3.935: 82.2278% ( 414) 00:16:51.337 3.935 - 3.959: 84.7923% ( 329) 00:16:51.337 3.959 - 3.982: 87.1853% ( 307) 00:16:51.337 3.982 - 4.006: 88.8144% ( 209) 
00:16:51.337 4.006 - 4.030: 90.2175% ( 180) 00:16:51.337 4.030 - 4.053: 91.3711% ( 148) 00:16:51.337 4.053 - 4.077: 92.2831% ( 117) 00:16:51.337 4.077 - 4.101: 93.1172% ( 107) 00:16:51.337 4.101 - 4.124: 93.8109% ( 89) 00:16:51.337 4.124 - 4.148: 94.5280% ( 92) 00:16:51.337 4.148 - 4.172: 95.1360% ( 78) 00:16:51.337 4.172 - 4.196: 95.5024% ( 47) 00:16:51.337 4.196 - 4.219: 95.9233% ( 54) 00:16:51.337 4.219 - 4.243: 96.2117% ( 37) 00:16:51.337 4.243 - 4.267: 96.4300% ( 28) 00:16:51.337 4.267 - 4.290: 96.5859% ( 20) 00:16:51.337 4.290 - 4.314: 96.7028% ( 15) 00:16:51.337 4.314 - 4.338: 96.8275% ( 16) 00:16:51.337 4.338 - 4.361: 96.9366% ( 14) 00:16:51.337 4.361 - 4.385: 97.0380% ( 13) 00:16:51.337 4.385 - 4.409: 97.1237% ( 11) 00:16:51.337 4.409 - 4.433: 97.2017% ( 10) 00:16:51.337 4.433 - 4.456: 97.2718% ( 9) 00:16:51.337 4.456 - 4.480: 97.3420% ( 9) 00:16:51.337 4.480 - 4.504: 97.3653% ( 3) 00:16:51.337 4.504 - 4.527: 97.3887% ( 3) 00:16:51.337 4.527 - 4.551: 97.4199% ( 4) 00:16:51.337 4.575 - 4.599: 97.4355% ( 2) 00:16:51.337 4.599 - 4.622: 97.4511% ( 2) 00:16:51.337 4.646 - 4.670: 97.4745% ( 3) 00:16:51.337 4.670 - 4.693: 97.4823% ( 1) 00:16:51.337 4.693 - 4.717: 97.4901% ( 1) 00:16:51.337 4.717 - 4.741: 97.4979% ( 1) 00:16:51.337 4.741 - 4.764: 97.5057% ( 1) 00:16:51.337 4.788 - 4.812: 97.5134% ( 1) 00:16:51.337 4.812 - 4.836: 97.5368% ( 3) 00:16:51.337 4.836 - 4.859: 97.5914% ( 7) 00:16:51.337 4.859 - 4.883: 97.5992% ( 1) 00:16:51.337 4.883 - 4.907: 97.6226% ( 3) 00:16:51.337 4.907 - 4.930: 97.6615% ( 5) 00:16:51.337 4.930 - 4.954: 97.7005% ( 5) 00:16:51.337 4.954 - 4.978: 97.7473% ( 6) 00:16:51.337 4.978 - 5.001: 97.8408% ( 12) 00:16:51.338 5.001 - 5.025: 97.8798% ( 5) 00:16:51.338 5.025 - 5.049: 97.8954% ( 2) 00:16:51.338 5.049 - 5.073: 97.9811% ( 11) 00:16:51.338 5.073 - 5.096: 98.0123% ( 4) 00:16:51.338 5.096 - 5.120: 98.0357% ( 3) 00:16:51.338 5.120 - 5.144: 98.0981% ( 8) 00:16:51.338 5.144 - 5.167: 98.1059% ( 1) 00:16:51.338 5.167 - 5.191: 98.1370% ( 4) 
00:16:51.338 5.191 - 5.215: 98.1604% ( 3) 00:16:51.338 5.215 - 5.239: 98.1682% ( 1) 00:16:51.338 5.239 - 5.262: 98.2228% ( 7) 00:16:51.338 5.262 - 5.286: 98.2773% ( 7) 00:16:51.338 5.286 - 5.310: 98.2851% ( 1) 00:16:51.338 5.310 - 5.333: 98.2929% ( 1) 00:16:51.338 5.333 - 5.357: 98.3163% ( 3) 00:16:51.338 5.357 - 5.381: 98.3397% ( 3) 00:16:51.338 5.381 - 5.404: 98.3475% ( 1) 00:16:51.338 5.641 - 5.665: 98.3553% ( 1) 00:16:51.338 5.760 - 5.784: 98.3631% ( 1) 00:16:51.338 5.926 - 5.950: 98.3709% ( 1) 00:16:51.338 5.973 - 5.997: 98.3787% ( 1) 00:16:51.338 6.068 - 6.116: 98.3865% ( 1) 00:16:51.338 6.163 - 6.210: 98.4021% ( 2) 00:16:51.338 6.210 - 6.258: 98.4254% ( 3) 00:16:51.338 6.305 - 6.353: 98.4332% ( 1) 00:16:51.338 6.353 - 6.400: 98.4410% ( 1) 00:16:51.338 6.400 - 6.447: 98.4488% ( 1) 00:16:51.338 6.590 - 6.637: 98.4566% ( 1) 00:16:51.338 7.064 - 7.111: 98.4644% ( 1) 00:16:51.338 7.253 - 7.301: 98.4722% ( 1) 00:16:51.338 7.348 - 7.396: 98.4800% ( 1) 00:16:51.338 7.396 - 7.443: 98.4878% ( 1) 00:16:51.338 7.490 - 7.538: 98.4956% ( 1) 00:16:51.338 7.585 - 7.633: 98.5034% ( 1) 00:16:51.338 7.680 - 7.727: 98.5112% ( 1) 00:16:51.338 7.870 - 7.917: 98.5268% ( 2) 00:16:51.338 7.917 - 7.964: 98.5346% ( 1) 00:16:51.338 8.439 - 8.486: 98.5424% ( 1) 00:16:51.338 8.486 - 8.533: 98.5502% ( 1) 00:16:51.338 8.628 - 8.676: 98.5735% ( 3) 00:16:51.338 8.676 - 8.723: 98.5813% ( 1) 00:16:51.338 8.865 - 8.913: 98.5891% ( 1) 00:16:51.338 8.960 - 9.007: 98.6125% ( 3) 00:16:51.338 9.055 - 9.102: 98.6281% ( 2) 00:16:51.338 9.102 - 9.150: 98.6359% ( 1) 00:16:51.338 9.197 - 9.244: 98.6515% ( 2) 00:16:51.338 9.244 - 9.292: 98.6593% ( 1) 00:16:51.338 9.766 - 9.813: 98.6749% ( 2) 00:16:51.338 10.098 - 10.145: 98.6827% ( 1) 00:16:51.338 10.193 - 10.240: 98.6905% ( 1) 00:16:51.338 10.430 - 10.477: 98.6983% ( 1) 00:16:51.338 10.619 - 10.667: 98.7061% ( 1) 00:16:51.338 10.761 - 10.809: 98.7139% ( 1) 00:16:51.338 10.809 - 10.856: 98.7216% ( 1) 00:16:51.338 11.093 - 11.141: 98.7372% ( 2) 
00:16:51.338 11.188 - 11.236: 98.7450% ( 1) 00:16:51.338 11.662 - 11.710: 98.7528% ( 1) 00:16:51.338 11.947 - 11.994: 98.7606% ( 1) 00:16:51.338 12.136 - 12.231: 98.7684% ( 1) 00:16:51.338 12.326 - 12.421: 98.7762% ( 1) 00:16:51.338 12.421 - 12.516: 98.7840% ( 1) 00:16:51.338 12.800 - 12.895: 98.7918% ( 1) 00:16:51.338 12.895 - 12.990: 98.7996% ( 1) 00:16:51.338 13.179 - 13.274: 98.8074% ( 1) 00:16:51.338 13.369 - 13.464: 98.8152% ( 1) 00:16:51.338 13.653 - 13.748: 98.8308% ( 2) 00:16:51.338 13.843 - 13.938: 98.8386% ( 1) 00:16:51.338 14.791 - 14.886: 98.8464% ( 1) 00:16:51.338 16.213 - 16.308: 98.8542% ( 1) 00:16:51.338 17.067 - 17.161: 98.8620% ( 1) 00:16:51.338 17.256 - 17.351: 98.8931% ( 4) 00:16:51.338 17.351 - 17.446: 98.9087% ( 2) 00:16:51.338 17.446 - 17.541: 98.9165% ( 1) 00:16:51.338 17.541 - 17.636: 98.9789% ( 8) 00:16:51.338 17.636 - 17.730: 99.0646% ( 11) 00:16:51.338 17.730 - 17.825: 99.1036% ( 5) 00:16:51.338 17.825 - 17.920: 99.1504% ( 6) 00:16:51.338 17.920 - 18.015: 99.1971% ( 6) 00:16:51.338 18.015 - 18.110: 99.2439% ( 6) 00:16:51.338 18.110 - 18.204: 99.2985% ( 7) 00:16:51.338 18.204 - 18.299: 99.3452% ( 6) 00:16:51.338 18.299 - 18.394: 99.4154% ( 9) 00:16:51.338 18.394 - 18.489: 99.4933% ( 10) 00:16:51.338 18.489 - 18.584: 99.6025% ( 14) 00:16:51.338 18.584 - 18.679: 99.6492% ( 6) 00:16:51.338 18.679 - 18.773: 99.6960% ( 6) 00:16:51.338 18.773 - 18.868: 99.7428% ( 6) 00:16:51.338 18.868 - 18.963: 99.7895% ( 6) 00:16:51.338 18.963 - 19.058: 99.7973% ( 1) 00:16:51.338 19.058 - 19.153: 99.8051% ( 1) 00:16:51.338 19.153 - 19.247: 99.8129% ( 1) 00:16:51.338 19.247 - 19.342: 99.8207% ( 1) 00:16:51.338 19.437 - 19.532: 99.8285% ( 1) 00:16:51.338 19.532 - 19.627: 99.8363% ( 1) 00:16:51.338 20.480 - 20.575: 99.8441% ( 1) 00:16:51.338 20.859 - 20.954: 99.8519% ( 1) 00:16:51.338 21.333 - 21.428: 99.8597% ( 1) 00:16:51.338 21.902 - 21.997: 99.8675% ( 1) 00:16:51.338 21.997 - 22.092: 99.8753% ( 1) 00:16:51.338 22.661 - 22.756: 99.8831% ( 1) 00:16:51.338 
23.799 - 23.893: 99.8909% ( 1) 00:16:51.338 25.790 - 25.979: 99.8987% ( 1) 00:16:51.338 33.754 - 33.944: 99.9065% ( 1) 00:16:51.338 36.030 - 36.219: 99.9143% ( 1) 00:16:51.338 3980.705 - 4004.978: 99.9376% ( 3) 00:16:51.338 4004.978 - 4029.250: 100.0000% ( 8) 00:16:51.338 00:16:51.338 Complete histogram 00:16:51.338 ================== 00:16:51.338 Range in us Cumulative Count 00:16:51.338 2.062 - 2.074: 9.3928% ( 1205) 00:16:51.338 2.074 - 2.086: 43.5264% ( 4379) 00:16:51.338 2.086 - 2.098: 47.6265% ( 526) 00:16:51.338 2.098 - 2.110: 51.5862% ( 508) 00:16:51.338 2.110 - 2.121: 58.2508% ( 855) 00:16:51.338 2.121 - 2.133: 60.7608% ( 322) 00:16:51.338 2.133 - 2.145: 69.0389% ( 1062) 00:16:51.338 2.145 - 2.157: 76.6934% ( 982) 00:16:51.338 2.157 - 2.169: 77.5820% ( 114) 00:16:51.338 2.169 - 2.181: 79.7568% ( 279) 00:16:51.338 2.181 - 2.193: 81.7289% ( 253) 00:16:51.338 2.193 - 2.204: 82.6097% ( 113) 00:16:51.338 2.204 - 2.216: 85.1820% ( 330) 00:16:51.338 2.216 - 2.228: 88.3701% ( 409) 00:16:51.338 2.228 - 2.240: 90.1473% ( 228) 00:16:51.338 2.240 - 2.252: 92.0415% ( 243) 00:16:51.338 2.252 - 2.264: 93.1250% ( 139) 00:16:51.338 2.264 - 2.276: 93.4056% ( 36) 00:16:51.338 2.276 - 2.287: 93.7641% ( 46) 00:16:51.338 2.287 - 2.299: 94.0837% ( 41) 00:16:51.338 2.299 - 2.311: 94.7229% ( 82) 00:16:51.338 2.311 - 2.323: 95.2451% ( 67) 00:16:51.338 2.323 - 2.335: 95.3309% ( 11) 00:16:51.338 2.335 - 2.347: 95.3543% ( 3) 00:16:51.338 2.347 - 2.359: 95.3777% ( 3) 00:16:51.338 2.359 - 2.370: 95.4556% ( 10) 00:16:51.338 2.370 - 2.382: 95.5725% ( 15) 00:16:51.338 2.382 - 2.394: 95.8609% ( 37) 00:16:51.338 2.394 - 2.406: 96.1416% ( 36) 00:16:51.338 2.406 - 2.418: 96.3286% ( 24) 00:16:51.338 2.418 - 2.430: 96.4767% ( 19) 00:16:51.338 2.430 - 2.441: 96.6716% ( 25) 00:16:51.338 2.441 - 2.453: 96.8275% ( 20) 00:16:51.338 2.453 - 2.465: 97.0068% ( 23) 00:16:51.338 2.465 - 2.477: 97.2172% ( 27) 00:16:51.338 2.477 - 2.489: 97.3887% ( 22) 00:16:51.338 2.489 - 2.501: 97.5212% ( 17) 00:16:51.338 
2.501 - 2.513: 97.7395% ( 28) 00:16:51.338 2.513 - 2.524: 97.8330% ( 12) 00:16:51.338 2.524 - 2.536: 97.9110% ( 10) 00:16:51.338 2.536 - 2.548: 98.0123% ( 13) 00:16:51.338 2.548 - 2.560: 98.0825% ( 9) 00:16:51.338 2.560 - 2.572: 98.1059% ( 3) 00:16:51.338 2.572 - 2.584: 98.1682% ( 8) 00:16:51.338 2.584 - 2.596: 98.1994% ( 4) 00:16:51.338 2.596 - 2.607: 98.2384% ( 5) 00:16:51.338 2.619 - 2.631: 98.2462% ( 1) 00:16:51.338 2.631 - 2.643: 98.2540% ( 1) 00:16:51.338 2.643 - 2.655: 98.2618% ( 1) 00:16:51.338 2.667 - 2.679: 98.2773% ( 2) 00:16:51.338 2.714 - 2.726: 98.2851% ( 1) 00:16:51.338 2.738 - 2.750: 98.2929% ( 1) 00:16:51.338 2.761 - 2.773: 98.3007% ( 1) 00:16:51.338 2.785 - 2.797: 98.3241% ( 3) 00:16:51.338 2.797 - 2.809: 98.3319% ( 1) 00:16:51.338 2.856 - 2.868: 98.3397% ( 1) 00:16:51.338 2.880 - 2.892: 98.3475% ( 1) 00:16:51.338 2.904 - 2.916: 98.3553% ( 1) 00:16:51.338 3.461 - 3.484: 98.3631% ( 1) 00:16:51.338 3.579 - 3.603: 98.3787% ( 2) 00:16:51.338 3.603 - 3.627: 98.3943% ( 2) 00:16:51.338 3.627 - 3.650: 98.4021% ( 1) 00:16:51.338 3.650 - 3.674: 98.4099% ( 1) 00:16:51.338 3.698 - 3.721: 98.4176% ( 1) 00:16:51.338 3.721 - 3.745: 98.4254% ( 1) 00:16:51.338 3.793 - 3.816: 98.4332% ( 1) 00:16:51.338 3.840 - 3.864: 98.4566% ( 3) 00:16:51.338 3.864 - 3.887: 98.4644% ( 1) 00:16:51.338 3.887 - 3.911: 98.4722% ( 1) 00:16:51.338 3.911 - 3.935: 98.4878% ( 2) 00:16:51.338 4.030 - 4.053: 98.4956% ( 1) 00:16:51.338 4.077 - 4.101: 98.5190% ( 3) 00:16:51.338 4.124 - 4.148: 98.5346% ( 2) 00:16:51.338 4.290 - 4.314: 98.5424% ( 1) 00:16:51.338 4.433 - 4.456: 98.5502% ( 1) 00:16:51.338 4.527 - 4.551: 98.5580% ( 1) 00:16:51.338 5.760 - 5.784: 98.5657% ( 1) 00:16:51.338 5.784 - 5.807: 98.5735% ( 1) 00:16:51.338 5.807 - 5.831: 98.5813% ( 1) 00:16:51.338 5.973 - 5.997: 98.5969% ( 2) 00:16:51.338 6.116 - 6.163: 98.6125% ( 2) 00:16:51.338 6.400 - 6.447: 98.6203% ( 1) 00:16:51.338 6.495 - 6.542: 98.6281% ( 1) 00:16:51.338 6.732 - 6.779: 98.6359% ( 1) 00:16:51.338 6.827 - 6.874: 
98.6437% ( 1) 00:16:51.339 [2024-10-07 13:27:33.017488] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:51.597 6.969 - 7.016: 98.6593% ( 2) 00:16:51.598 7.016 - 7.064: 98.6671% ( 1) 00:16:51.598 7.064 - 7.111: 98.6749% ( 1) 00:16:51.598 7.111 - 7.159: 98.6827% ( 1) 00:16:51.598 7.159 - 7.206: 98.6905% ( 1) 00:16:51.598 7.206 - 7.253: 98.7061% ( 2) 00:16:51.598 7.253 - 7.301: 98.7139% ( 1) 00:16:51.598 7.443 - 7.490: 98.7216% ( 1) 00:16:51.598 7.680 - 7.727: 98.7294% ( 1) 00:16:51.598 8.154 - 8.201: 98.7372% ( 1) 00:16:51.598 8.249 - 8.296: 98.7450% ( 1) 00:16:51.598 9.055 - 9.102: 98.7528% ( 1) 00:16:51.598 15.360 - 15.455: 98.7606% ( 1) 00:16:51.598 15.550 - 15.644: 98.7684% ( 1) 00:16:51.598 15.739 - 15.834: 98.7762% ( 1) 00:16:51.598 15.834 - 15.929: 98.8308% ( 7) 00:16:51.598 15.929 - 16.024: 98.8697% ( 5) 00:16:51.598 16.024 - 16.119: 98.9165% ( 6) 00:16:51.598 16.119 - 16.213: 98.9633% ( 6) 00:16:51.598 16.213 - 16.308: 99.0023% ( 5) 00:16:51.598 16.308 - 16.403: 99.0334% ( 4) 00:16:51.598 16.403 - 16.498: 99.0802% ( 6) 00:16:51.598 16.498 - 16.593: 99.1192% ( 5) 00:16:51.598 16.593 - 16.687: 99.1504% ( 4) 00:16:51.598 16.687 - 16.782: 99.1737% ( 3) 00:16:51.598 16.782 - 16.877: 99.2049% ( 4) 00:16:51.598 16.877 - 16.972: 99.2127% ( 1) 00:16:51.598 16.972 - 17.067: 99.2283% ( 2) 00:16:51.598 17.067 - 17.161: 99.2439% ( 2) 00:16:51.598 17.161 - 17.256: 99.2751% ( 4) 00:16:51.598 17.541 - 17.636: 99.2829% ( 1) 00:16:51.598 17.636 - 17.730: 99.2985% ( 2) 00:16:51.598 17.730 - 17.825: 99.3063% ( 1) 00:16:51.598 17.920 - 18.015: 99.3141% ( 1) 00:16:51.598 18.015 - 18.110: 99.3296% ( 2) 00:16:51.598 18.204 - 18.299: 99.3452% ( 2) 00:16:51.598 18.394 - 18.489: 99.3608% ( 2) 00:16:51.598 18.489 - 18.584: 99.3686% ( 1) 00:16:51.598 18.584 - 18.679: 99.3764% ( 1) 00:16:51.598 19.058 - 19.153: 99.3842% ( 1) 00:16:51.598 20.006 - 20.101: 99.3920% ( 1) 00:16:51.598 2730.667 - 2742.803: 99.3998% ( 1) 00:16:51.598 
3665.161 - 3689.434: 99.4076% ( 1) 00:16:51.598 3980.705 - 4004.978: 99.8129% ( 52) 00:16:51.598 4004.978 - 4029.250: 99.9922% ( 23) 00:16:51.598 5000.154 - 5024.427: 100.0000% ( 1) 00:16:51.598 00:16:51.598 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:51.598 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:51.598 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:51.598 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:51.598 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:51.856 [ 00:16:51.856 { 00:16:51.856 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:51.856 "subtype": "Discovery", 00:16:51.856 "listen_addresses": [], 00:16:51.856 "allow_any_host": true, 00:16:51.856 "hosts": [] 00:16:51.856 }, 00:16:51.856 { 00:16:51.856 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:51.856 "subtype": "NVMe", 00:16:51.856 "listen_addresses": [ 00:16:51.856 { 00:16:51.856 "trtype": "VFIOUSER", 00:16:51.856 "adrfam": "IPv4", 00:16:51.856 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:51.856 "trsvcid": "0" 00:16:51.856 } 00:16:51.856 ], 00:16:51.856 "allow_any_host": true, 00:16:51.856 "hosts": [], 00:16:51.856 "serial_number": "SPDK1", 00:16:51.856 "model_number": "SPDK bdev Controller", 00:16:51.856 "max_namespaces": 32, 00:16:51.856 "min_cntlid": 1, 00:16:51.856 "max_cntlid": 65519, 00:16:51.856 "namespaces": [ 00:16:51.856 { 00:16:51.856 "nsid": 1, 00:16:51.856 "bdev_name": "Malloc1", 00:16:51.856 "name": "Malloc1", 00:16:51.856 "nguid": 
"3ED519B02E834CBBB841F19AA3A57A87", 00:16:51.856 "uuid": "3ed519b0-2e83-4cbb-b841-f19aa3a57a87" 00:16:51.856 }, 00:16:51.856 { 00:16:51.856 "nsid": 2, 00:16:51.856 "bdev_name": "Malloc3", 00:16:51.856 "name": "Malloc3", 00:16:51.856 "nguid": "E231E5CF4271470FB4F4403B03755AC1", 00:16:51.856 "uuid": "e231e5cf-4271-470f-b4f4-403b03755ac1" 00:16:51.856 } 00:16:51.856 ] 00:16:51.856 }, 00:16:51.856 { 00:16:51.856 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:51.856 "subtype": "NVMe", 00:16:51.856 "listen_addresses": [ 00:16:51.856 { 00:16:51.856 "trtype": "VFIOUSER", 00:16:51.856 "adrfam": "IPv4", 00:16:51.856 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:51.856 "trsvcid": "0" 00:16:51.856 } 00:16:51.856 ], 00:16:51.856 "allow_any_host": true, 00:16:51.856 "hosts": [], 00:16:51.856 "serial_number": "SPDK2", 00:16:51.856 "model_number": "SPDK bdev Controller", 00:16:51.856 "max_namespaces": 32, 00:16:51.856 "min_cntlid": 1, 00:16:51.856 "max_cntlid": 65519, 00:16:51.856 "namespaces": [ 00:16:51.856 { 00:16:51.856 "nsid": 1, 00:16:51.856 "bdev_name": "Malloc2", 00:16:51.856 "name": "Malloc2", 00:16:51.856 "nguid": "A643E70A6FDD482AABF9A401A2FC2BFF", 00:16:51.856 "uuid": "a643e70a-6fdd-482a-abf9-a401a2fc2bff" 00:16:51.856 } 00:16:51.856 ] 00:16:51.856 } 00:16:51.856 ] 00:16:51.856 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:51.856 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1786500 00:16:51.856 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:51.856 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:51.856 13:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:16:51.856 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:51.856 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:51.856 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:16:51.856 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:51.856 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:51.856 [2024-10-07 13:27:33.554164] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:52.114 Malloc4 00:16:52.114 13:27:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:52.372 [2024-10-07 13:27:33.987401] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:52.372 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:52.372 Asynchronous Event Request test 00:16:52.372 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:52.372 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:52.372 Registering asynchronous event callbacks... 00:16:52.372 Starting namespace attribute notice tests for all controllers... 
00:16:52.372 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:52.372 aer_cb - Changed Namespace 00:16:52.372 Cleaning up... 00:16:52.632 [ 00:16:52.632 { 00:16:52.632 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:52.632 "subtype": "Discovery", 00:16:52.632 "listen_addresses": [], 00:16:52.632 "allow_any_host": true, 00:16:52.632 "hosts": [] 00:16:52.632 }, 00:16:52.632 { 00:16:52.632 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:52.632 "subtype": "NVMe", 00:16:52.632 "listen_addresses": [ 00:16:52.632 { 00:16:52.632 "trtype": "VFIOUSER", 00:16:52.632 "adrfam": "IPv4", 00:16:52.632 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:52.632 "trsvcid": "0" 00:16:52.632 } 00:16:52.632 ], 00:16:52.632 "allow_any_host": true, 00:16:52.632 "hosts": [], 00:16:52.632 "serial_number": "SPDK1", 00:16:52.632 "model_number": "SPDK bdev Controller", 00:16:52.632 "max_namespaces": 32, 00:16:52.632 "min_cntlid": 1, 00:16:52.632 "max_cntlid": 65519, 00:16:52.632 "namespaces": [ 00:16:52.632 { 00:16:52.632 "nsid": 1, 00:16:52.632 "bdev_name": "Malloc1", 00:16:52.632 "name": "Malloc1", 00:16:52.632 "nguid": "3ED519B02E834CBBB841F19AA3A57A87", 00:16:52.632 "uuid": "3ed519b0-2e83-4cbb-b841-f19aa3a57a87" 00:16:52.632 }, 00:16:52.632 { 00:16:52.632 "nsid": 2, 00:16:52.632 "bdev_name": "Malloc3", 00:16:52.632 "name": "Malloc3", 00:16:52.632 "nguid": "E231E5CF4271470FB4F4403B03755AC1", 00:16:52.632 "uuid": "e231e5cf-4271-470f-b4f4-403b03755ac1" 00:16:52.632 } 00:16:52.632 ] 00:16:52.632 }, 00:16:52.632 { 00:16:52.632 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:52.632 "subtype": "NVMe", 00:16:52.632 "listen_addresses": [ 00:16:52.632 { 00:16:52.632 "trtype": "VFIOUSER", 00:16:52.632 "adrfam": "IPv4", 00:16:52.632 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:52.632 "trsvcid": "0" 00:16:52.632 } 00:16:52.632 ], 00:16:52.632 "allow_any_host": true, 00:16:52.632 "hosts": [], 00:16:52.632 "serial_number": 
"SPDK2", 00:16:52.632 "model_number": "SPDK bdev Controller", 00:16:52.632 "max_namespaces": 32, 00:16:52.632 "min_cntlid": 1, 00:16:52.632 "max_cntlid": 65519, 00:16:52.632 "namespaces": [ 00:16:52.632 { 00:16:52.632 "nsid": 1, 00:16:52.632 "bdev_name": "Malloc2", 00:16:52.632 "name": "Malloc2", 00:16:52.632 "nguid": "A643E70A6FDD482AABF9A401A2FC2BFF", 00:16:52.632 "uuid": "a643e70a-6fdd-482a-abf9-a401a2fc2bff" 00:16:52.632 }, 00:16:52.632 { 00:16:52.632 "nsid": 2, 00:16:52.632 "bdev_name": "Malloc4", 00:16:52.632 "name": "Malloc4", 00:16:52.632 "nguid": "447FCF60171A497E9D6F6F44DE534601", 00:16:52.632 "uuid": "447fcf60-171a-497e-9d6f-6f44de534601" 00:16:52.632 } 00:16:52.632 ] 00:16:52.632 } 00:16:52.632 ] 00:16:52.632 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1786500 00:16:52.632 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:52.632 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1780536 00:16:52.632 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1780536 ']' 00:16:52.632 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1780536 00:16:52.632 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:52.632 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:52.632 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1780536 00:16:52.632 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:52.632 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:52.632 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 1780536' 00:16:52.632 killing process with pid 1780536 00:16:52.632 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1780536 00:16:52.632 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1780536 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1786680 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1786680' 00:16:53.201 Process pid: 1786680 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1786680 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 1786680 ']' 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:53.201 13:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:53.201 [2024-10-07 13:27:34.733954] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:53.201 [2024-10-07 13:27:34.734986] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:16:53.201 [2024-10-07 13:27:34.735044] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:53.201 [2024-10-07 13:27:34.789487] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:53.201 [2024-10-07 13:27:34.891049] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:53.201 [2024-10-07 13:27:34.891129] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:53.201 [2024-10-07 13:27:34.891144] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:53.201 [2024-10-07 13:27:34.891156] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:53.201 [2024-10-07 13:27:34.891180] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:53.201 [2024-10-07 13:27:34.892615] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.201 [2024-10-07 13:27:34.892745] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:53.201 [2024-10-07 13:27:34.892788] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:16:53.201 [2024-10-07 13:27:34.892792] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.460 [2024-10-07 13:27:34.985240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:16:53.460 [2024-10-07 13:27:34.985489] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:16:53.460 [2024-10-07 13:27:34.985778] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:16:53.460 [2024-10-07 13:27:34.986453] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:53.460 [2024-10-07 13:27:34.986719] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:16:53.460 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:53.460 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:16:53.460 13:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:54.396 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:54.654 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:54.654 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:54.654 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:54.654 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:54.654 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:54.912 Malloc1 00:16:54.912 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:55.170 13:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:55.737 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:16:55.996 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:55.996 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:55.996 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:56.254 Malloc2 00:16:56.254 13:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:56.512 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:56.771 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:57.029 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:57.029 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1786680 00:16:57.029 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 1786680 ']' 00:16:57.029 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 1786680 00:16:57.029 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:16:57.029 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:57.029 13:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1786680 00:16:57.029 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:57.029 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:57.029 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1786680' 00:16:57.029 killing process with pid 1786680 00:16:57.029 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 1786680 00:16:57.029 13:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 1786680 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:57.597 00:16:57.597 real 0m53.402s 00:16:57.597 user 3m25.999s 00:16:57.597 sys 0m4.009s 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:57.597 ************************************ 00:16:57.597 END TEST nvmf_vfio_user 00:16:57.597 ************************************ 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:16:57.597 ************************************ 00:16:57.597 START TEST nvmf_vfio_user_nvme_compliance 00:16:57.597 ************************************ 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:57.597 * Looking for test storage... 00:16:57.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lcov --version 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:57.597 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:57.597 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:57.597 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:57.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.597 --rc genhtml_branch_coverage=1 00:16:57.597 --rc genhtml_function_coverage=1 00:16:57.597 --rc genhtml_legend=1 00:16:57.597 --rc geninfo_all_blocks=1 00:16:57.598 --rc geninfo_unexecuted_blocks=1 00:16:57.598 00:16:57.598 ' 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:57.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.598 --rc genhtml_branch_coverage=1 00:16:57.598 --rc genhtml_function_coverage=1 00:16:57.598 --rc genhtml_legend=1 00:16:57.598 --rc geninfo_all_blocks=1 00:16:57.598 --rc geninfo_unexecuted_blocks=1 00:16:57.598 00:16:57.598 ' 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:57.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.598 --rc genhtml_branch_coverage=1 00:16:57.598 --rc genhtml_function_coverage=1 00:16:57.598 --rc 
genhtml_legend=1 00:16:57.598 --rc geninfo_all_blocks=1 00:16:57.598 --rc geninfo_unexecuted_blocks=1 00:16:57.598 00:16:57.598 ' 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:57.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.598 --rc genhtml_branch_coverage=1 00:16:57.598 --rc genhtml_function_coverage=1 00:16:57.598 --rc genhtml_legend=1 00:16:57.598 --rc geninfo_all_blocks=1 00:16:57.598 --rc geninfo_unexecuted_blocks=1 00:16:57.598 00:16:57.598 ' 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.598 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:57.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:57.598 13:27:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1787335 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1787335' 00:16:57.598 Process pid: 1787335 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1787335 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 1787335 ']' 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:57.598 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:57.598 [2024-10-07 13:27:39.300198] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:16:57.598 [2024-10-07 13:27:39.300291] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.859 [2024-10-07 13:27:39.356310] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:57.859 [2024-10-07 13:27:39.458979] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.859 [2024-10-07 13:27:39.459054] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.859 [2024-10-07 13:27:39.459082] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:57.859 [2024-10-07 13:27:39.459093] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:57.859 [2024-10-07 13:27:39.459102] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:57.859 [2024-10-07 13:27:39.459851] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.859 [2024-10-07 13:27:39.459910] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.859 [2024-10-07 13:27:39.459913] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.119 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:58.119 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:16:58.119 13:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.058 13:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:59.058 malloc0 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:59.058 13:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:59.058 00:16:59.058 00:16:59.058 CUnit - A unit testing framework for C - Version 2.1-3 00:16:59.058 http://cunit.sourceforge.net/ 00:16:59.058 00:16:59.058 00:16:59.058 Suite: nvme_compliance 00:16:59.317 Test: admin_identify_ctrlr_verify_dptr ...[2024-10-07 13:27:40.807177] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.317 [2024-10-07 13:27:40.808595] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:59.317 [2024-10-07 13:27:40.808620] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:59.317 [2024-10-07 13:27:40.808646] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:59.317 [2024-10-07 13:27:40.813213] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.317 passed 00:16:59.317 Test: admin_identify_ctrlr_verify_fused ...[2024-10-07 13:27:40.897836] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.317 [2024-10-07 13:27:40.900861] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.317 passed 00:16:59.317 Test: admin_identify_ns ...[2024-10-07 13:27:40.986284] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.574 [2024-10-07 13:27:41.046699] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:59.574 [2024-10-07 13:27:41.054687] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:59.574 [2024-10-07 13:27:41.075811] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:16:59.574 passed 00:16:59.574 Test: admin_get_features_mandatory_features ...[2024-10-07 13:27:41.160094] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.574 [2024-10-07 13:27:41.163114] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.574 passed 00:16:59.574 Test: admin_get_features_optional_features ...[2024-10-07 13:27:41.247625] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.574 [2024-10-07 13:27:41.252676] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.574 passed 00:16:59.832 Test: admin_set_features_number_of_queues ...[2024-10-07 13:27:41.335932] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.832 [2024-10-07 13:27:41.440806] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:59.832 passed 00:16:59.832 Test: admin_get_log_page_mandatory_logs ...[2024-10-07 13:27:41.523930] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:59.832 [2024-10-07 13:27:41.526953] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:00.092 passed 00:17:00.092 Test: admin_get_log_page_with_lpo ...[2024-10-07 13:27:41.610155] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:00.092 [2024-10-07 13:27:41.677695] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:00.092 [2024-10-07 13:27:41.690776] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:00.092 passed 00:17:00.092 Test: fabric_property_get ...[2024-10-07 13:27:41.775004] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:00.092 [2024-10-07 13:27:41.776289] vfio_user.c:5600:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:00.092 [2024-10-07 13:27:41.778041] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:00.350 passed 00:17:00.350 Test: admin_delete_io_sq_use_admin_qid ...[2024-10-07 13:27:41.862567] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:00.350 [2024-10-07 13:27:41.863913] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:00.350 [2024-10-07 13:27:41.865592] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:00.350 passed 00:17:00.350 Test: admin_delete_io_sq_delete_sq_twice ...[2024-10-07 13:27:41.950221] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:00.350 [2024-10-07 13:27:42.033695] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:00.350 [2024-10-07 13:27:42.049817] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:00.350 [2024-10-07 13:27:42.054929] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:00.610 passed 00:17:00.610 Test: admin_delete_io_cq_use_admin_qid ...[2024-10-07 13:27:42.139645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:00.610 [2024-10-07 13:27:42.140970] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:00.610 [2024-10-07 13:27:42.142686] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:00.610 passed 00:17:00.610 Test: admin_delete_io_cq_delete_cq_first ...[2024-10-07 13:27:42.223798] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:00.610 [2024-10-07 13:27:42.303689] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:00.868 [2024-10-07 
13:27:42.327692] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:00.868 [2024-10-07 13:27:42.332789] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:00.868 passed 00:17:00.868 Test: admin_create_io_cq_verify_iv_pc ...[2024-10-07 13:27:42.412327] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:00.869 [2024-10-07 13:27:42.413639] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:00.869 [2024-10-07 13:27:42.413721] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:00.869 [2024-10-07 13:27:42.415351] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:00.869 passed 00:17:00.869 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-10-07 13:27:42.499774] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.126 [2024-10-07 13:27:42.593677] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:01.126 [2024-10-07 13:27:42.601682] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:01.126 [2024-10-07 13:27:42.609681] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:01.126 [2024-10-07 13:27:42.617678] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:01.126 [2024-10-07 13:27:42.646786] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.126 passed 00:17:01.126 Test: admin_create_io_sq_verify_pc ...[2024-10-07 13:27:42.729282] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:01.126 [2024-10-07 13:27:42.745693] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:01.126 [2024-10-07 13:27:42.763657] 
vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:01.126 passed 00:17:01.386 Test: admin_create_io_qp_max_qps ...[2024-10-07 13:27:42.844227] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.322 [2024-10-07 13:27:43.989684] nvme_ctrlr.c:5504:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:02.891 [2024-10-07 13:27:44.371922] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:02.891 passed 00:17:02.891 Test: admin_create_io_sq_shared_cq ...[2024-10-07 13:27:44.453925] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:02.891 [2024-10-07 13:27:44.585689] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:03.162 [2024-10-07 13:27:44.622760] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:03.162 passed 00:17:03.162 00:17:03.162 Run Summary: Type Total Ran Passed Failed Inactive 00:17:03.162 suites 1 1 n/a 0 0 00:17:03.162 tests 18 18 18 0 0 00:17:03.162 asserts 360 360 360 0 n/a 00:17:03.162 00:17:03.162 Elapsed time = 1.582 seconds 00:17:03.162 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1787335 00:17:03.162 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 1787335 ']' 00:17:03.162 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 1787335 00:17:03.162 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:17:03.162 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:03.162 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1787335 00:17:03.162 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:03.162 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:03.162 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1787335' 00:17:03.162 killing process with pid 1787335 00:17:03.162 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 1787335 00:17:03.162 13:27:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 1787335 00:17:03.478 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:03.478 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:03.478 00:17:03.478 real 0m5.908s 00:17:03.478 user 0m16.462s 00:17:03.478 sys 0m0.586s 00:17:03.478 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:03.478 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:03.478 ************************************ 00:17:03.478 END TEST nvmf_vfio_user_nvme_compliance 00:17:03.478 ************************************ 00:17:03.478 13:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:03.478 13:27:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:03.478 13:27:45 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:17:03.478 13:27:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.478 ************************************ 00:17:03.478 START TEST nvmf_vfio_user_fuzz 00:17:03.478 ************************************ 00:17:03.478 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:03.478 * Looking for test storage... 00:17:03.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.478 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:03.478 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:17:03.478 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:03.759 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.760 13:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:03.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.760 --rc genhtml_branch_coverage=1 00:17:03.760 --rc genhtml_function_coverage=1 00:17:03.760 --rc genhtml_legend=1 00:17:03.760 --rc geninfo_all_blocks=1 00:17:03.760 --rc geninfo_unexecuted_blocks=1 00:17:03.760 00:17:03.760 ' 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:03.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.760 --rc genhtml_branch_coverage=1 00:17:03.760 --rc genhtml_function_coverage=1 00:17:03.760 --rc genhtml_legend=1 00:17:03.760 --rc geninfo_all_blocks=1 00:17:03.760 --rc geninfo_unexecuted_blocks=1 00:17:03.760 00:17:03.760 ' 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:03.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.760 --rc genhtml_branch_coverage=1 00:17:03.760 --rc genhtml_function_coverage=1 00:17:03.760 --rc genhtml_legend=1 00:17:03.760 --rc geninfo_all_blocks=1 00:17:03.760 --rc geninfo_unexecuted_blocks=1 00:17:03.760 00:17:03.760 ' 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:03.760 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:17:03.760 --rc genhtml_branch_coverage=1 00:17:03.760 --rc genhtml_function_coverage=1 00:17:03.760 --rc genhtml_legend=1 00:17:03.760 --rc geninfo_all_blocks=1 00:17:03.760 --rc geninfo_unexecuted_blocks=1 00:17:03.760 00:17:03.760 ' 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.760 13:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:03.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1788043 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1788043' 00:17:03.760 Process pid: 1788043 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1788043 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 1788043 ']' 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:03.760 13:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:03.760 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.020 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:04.020 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:17:04.020 13:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.956 malloc0 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.956 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.957 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:04.957 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.957 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:04.957 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.957 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:04.957 13:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:37.025 Fuzzing completed. Shutting down the fuzz application 00:17:37.025 00:17:37.025 Dumping successful admin opcodes: 00:17:37.025 8, 9, 10, 24, 00:17:37.025 Dumping successful io opcodes: 00:17:37.025 0, 00:17:37.025 NS: 0x200003a1ef00 I/O qp, Total commands completed: 642510, total successful commands: 2493, random_seed: 2403887552 00:17:37.025 NS: 0x200003a1ef00 admin qp, Total commands completed: 138893, total successful commands: 1127, random_seed: 3326220800 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1788043 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 1788043 ']' 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 1788043 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1788043 00:17:37.026 13:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1788043' 00:17:37.026 killing process with pid 1788043 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 1788043 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 1788043 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:37.026 00:17:37.026 real 0m32.434s 00:17:37.026 user 0m30.542s 00:17:37.026 sys 0m28.819s 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:37.026 ************************************ 00:17:37.026 END TEST nvmf_vfio_user_fuzz 00:17:37.026 ************************************ 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.026 ************************************ 00:17:37.026 START TEST nvmf_auth_target 00:17:37.026 ************************************ 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:37.026 * Looking for test storage... 00:17:37.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.026 13:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.026 13:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:37.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.026 --rc genhtml_branch_coverage=1 00:17:37.026 --rc genhtml_function_coverage=1 00:17:37.026 --rc genhtml_legend=1 00:17:37.026 --rc geninfo_all_blocks=1 00:17:37.026 --rc geninfo_unexecuted_blocks=1 00:17:37.026 00:17:37.026 ' 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:37.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.026 --rc genhtml_branch_coverage=1 00:17:37.026 --rc genhtml_function_coverage=1 00:17:37.026 --rc genhtml_legend=1 00:17:37.026 --rc geninfo_all_blocks=1 00:17:37.026 --rc geninfo_unexecuted_blocks=1 00:17:37.026 00:17:37.026 ' 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:37.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.026 --rc genhtml_branch_coverage=1 00:17:37.026 --rc genhtml_function_coverage=1 00:17:37.026 --rc genhtml_legend=1 00:17:37.026 --rc geninfo_all_blocks=1 00:17:37.026 --rc geninfo_unexecuted_blocks=1 00:17:37.026 00:17:37.026 ' 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:37.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.026 --rc genhtml_branch_coverage=1 00:17:37.026 --rc genhtml_function_coverage=1 00:17:37.026 --rc genhtml_legend=1 00:17:37.026 
--rc geninfo_all_blocks=1 00:17:37.026 --rc geninfo_unexecuted_blocks=1 00:17:37.026 00:17:37.026 ' 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.026 
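The trace above steps through the dotted-version comparison helper in `scripts/common.sh` (splitting `ver1`/`ver2` into components, padding the shorter list, and comparing element by element). As a hedged sketch of that logic — a reconstruction from the trace, not the actual SPDK source — the same behavior can be written as a standalone function:

```shell
#!/usr/bin/env bash
# Minimal sketch of the component-wise version compare exercised in the
# trace: split on dots, pad the shorter version with zeros, and return
# success (0) iff the first version is strictly older than the second.
# ver_lt is a hypothetical name; scripts/common.sh uses a more general op.
ver_lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && return 1                 # first version is newer
        (( a < b )) && return 0                 # first version is older
    done
    return 1                                    # equal versions: not less-than
}

ver_lt 1.6 2.0 && echo "1.6 is older than 2.0"
ver_lt 2.4 2.39.2 && echo "2.4 is older than 2.39.2 (numeric, not lexical)"
```

Note that the comparison is numeric per component, which is why `2.4` sorts before `2.39.2` even though it would not lexically.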
13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.026 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:37.027 13:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # prepare_net_devs 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # local -g is_hw=no 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # remove_spdk_ns 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:17:37.027 13:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:37.027 13:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:38.406 13:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.406 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:38.407 13:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:17:38.407 Found 0000:09:00.0 (0x8086 - 0x1592) 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:17:38.407 Found 0000:09:00.1 (0x8086 - 0x1592) 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:17:38.407 
13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:38.407 Found net devices under 0000:09:00.0: cvl_0_0 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:17:38.407 
13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ up == up ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:38.407 Found net devices under 0000:09:00.1: cvl_0_1 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # is_hw=yes 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:38.407 13:28:19 
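The device-discovery loop logged above resolves each candidate PCI function to its network interfaces through the kernel's sysfs layout (`/sys/bus/pci/devices/<bdf>/net/*`), which is how the E810 ports at `0000:09:00.0` and `0000:09:00.1` map to `cvl_0_0` and `cvl_0_1`. A small sketch of that lookup, with the sysfs root parameterized so it can be exercised against a fake tree (the BDF below is the one from this log):

```shell
#!/usr/bin/env bash
# Sketch of the sysfs lookup performed per NIC in the trace: a PCI
# function's bound network interfaces appear as directories under
# /sys/bus/pci/devices/<bdf>/net/. The second argument (sysfs root)
# is an addition for testability; the real script uses /sys directly.
list_pci_net_devs() {
    local pci=$1 sysfs=${2:-/sys/bus/pci/devices} dev
    for dev in "$sysfs/$pci/net/"*; do
        [[ -e $dev ]] || continue   # glob unmatched: no netdev bound to this BDF
        echo "${dev##*/}"           # strip the sysfs path, keep the ifname
    done
}

list_pci_net_devs 0000:09:00.0
```

On the test node this prints `cvl_0_0`; on a machine without the device bound it prints nothing, which is the "no net devices" branch the script guards against.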
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:38.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:17:38.407 00:17:38.407 --- 10.0.0.2 ping statistics --- 00:17:38.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.407 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:38.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:38.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:17:38.407 00:17:38.407 --- 10.0.0.1 ping statistics --- 00:17:38.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.407 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # return 0 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
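The `nvmf_tcp_init` sequence in the trace builds the two-sided test topology: one physical port is moved into a dedicated network namespace to act as the target while the other stays in the root namespace as the initiator, each side gets a 10.0.0.0/24 address, the NVMe/TCP listener port 4420 is opened in iptables, and connectivity is verified both ways with `ping`. A hedged reconstruction of that sequence, gathered into one function (interface and namespace names are the ones from this log; running it requires root and the actual ports, so it is defined here but not invoked):

```shell
#!/usr/bin/env bash
# Reconstruction of the namespace split driven by nvmf/common.sh in the
# trace. Not invoked here: it needs root privileges and the cvl_0_* ports.
NS=cvl_0_0_ns_spdk
setup_target_netns() {
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                  # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator IP (root ns)
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP discovery/IO port toward the initiator interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                               # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator
}
```

The payoff of the split is that `nvmf_tgt` can then be launched as `ip netns exec cvl_0_0_ns_spdk ...` (as the log does a few entries later), so target and initiator traffic really traverses the wire between the two ports rather than the loopback path.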
-- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1793246 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1793246 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1793246 ']' 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:38.407 13:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1793269 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@752 -- # digest=null 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=ccdf640214e5bb774c112f5735cb166ece8fb4f1a2771181 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-null.XXX 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-null.3t4 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key ccdf640214e5bb774c112f5735cb166ece8fb4f1a2771181 0 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 ccdf640214e5bb774c112f5735cb166ece8fb4f1a2771181 0 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=ccdf640214e5bb774c112f5735cb166ece8fb4f1a2771181 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=0 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-null.3t4 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-null.3t4 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.3t4 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f37f037e47d79f492c3670f5b7514a9ce6d3cf598a896db26310016020bbb5a0 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.88M 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f37f037e47d79f492c3670f5b7514a9ce6d3cf598a896db26310016020bbb5a0 3 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 f37f037e47d79f492c3670f5b7514a9ce6d3cf598a896db26310016020bbb5a0 3 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f37f037e47d79f492c3670f5b7514a9ce6d3cf598a896db26310016020bbb5a0 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@730 -- # digest=3 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.88M 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.88M 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.88M 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:17:38.667 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:17:38.668 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:38.668 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=d2a76378711ea31a59b8ca7596aadc52 00:17:38.668 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:17:38.668 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.556 00:17:38.668 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key d2a76378711ea31a59b8ca7596aadc52 1 00:17:38.668 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 
d2a76378711ea31a59b8ca7596aadc52 1 00:17:38.668 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:17:38.668 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:38.668 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=d2a76378711ea31a59b8ca7596aadc52 00:17:38.668 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 00:17:38.668 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.556 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.556 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.556 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=4a179fe9628052d92237161de8cb928a0cd03aef7392d287 00:17:38.926 13:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.p9j 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 4a179fe9628052d92237161de8cb928a0cd03aef7392d287 2 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 4a179fe9628052d92237161de8cb928a0cd03aef7392d287 2 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=4a179fe9628052d92237161de8cb928a0cd03aef7392d287 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.p9j 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.p9j 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.p9j 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A 
digests 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha384 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=48 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=af1f295a8f8b39196c6c0b510e90d37b789757b3d6b185c2 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha384.XXX 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha384.Kc5 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key af1f295a8f8b39196c6c0b510e90d37b789757b3d6b185c2 2 00:17:38.926 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 af1f295a8f8b39196c6c0b510e90d37b789757b3d6b185c2 2 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=af1f295a8f8b39196c6c0b510e90d37b789757b3d6b185c2 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=2 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha384.Kc5 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha384.Kc5 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.Kc5 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha256 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=32 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=8eafc22fac61832775815bd67001fdd4 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha256.XXX 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha256.QtK 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key 8eafc22fac61832775815bd67001fdd4 1 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # format_key DHHC-1 8eafc22fac61832775815bd67001fdd4 1 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=8eafc22fac61832775815bd67001fdd4 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=1 
00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha256.QtK 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha256.QtK 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.QtK 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@749 -- # local digest len file key 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # local -A digests 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digest=sha512 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # len=64 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # key=f7f25f210e709a3b1a1eb769f1d4a2bc93d122dc5776099bc2cc885d6b1d1f82 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # mktemp -t spdk.key-sha512.XXX 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # file=/tmp/spdk.key-sha512.xV2 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # format_dhchap_key f7f25f210e709a3b1a1eb769f1d4a2bc93d122dc5776099bc2cc885d6b1d1f82 3 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@745 -- # 
format_key DHHC-1 f7f25f210e709a3b1a1eb769f1d4a2bc93d122dc5776099bc2cc885d6b1d1f82 3 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # local prefix key digest 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # prefix=DHHC-1 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # key=f7f25f210e709a3b1a1eb769f1d4a2bc93d122dc5776099bc2cc885d6b1d1f82 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # digest=3 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@731 -- # python - 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # chmod 0600 /tmp/spdk.key-sha512.xV2 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # echo /tmp/spdk.key-sha512.xV2 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.xV2 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1793246 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1793246 ']' 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
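The repeated `gen_dhchap_key` / `format_key` steps traced above follow one pattern: read N random bytes with `xxd -p` from `/dev/urandom`, `mktemp` a key file, wrap the hex key as a `DHHC-1` secret via a `python -` heredoc, then `chmod 0600` the file. A minimal sketch of the formatting step, assuming the payload is base64(ASCII hex key || little-endian CRC-32 of that key) and that the two-digit field is the digest index seen in the log (`00`=null, `01`=sha256, `02`=sha384, `03`=sha512); the function name here is illustrative, not SPDK's API:

```python
import base64
import struct
import zlib

def format_dhchap_key(key_hex: str, digest: int) -> str:
    """Illustrative sketch of the DHHC-1 wrapping done by the
    `python -` heredoc in nvmf/common.sh's format_key.

    The base64 payload is the ASCII hex key followed by its CRC-32
    (4 bytes, little-endian); the two-digit field is the hash index
    matching the `digest=N` assignments traced in the log.
    """
    data = key_hex.encode()
    data += struct.pack("<I", zlib.crc32(data))
    return f"DHHC-1:{digest:02x}:{base64.b64encode(data).decode()}:"

# The 64-hex-digit sha512 key generated above (ckeys[0]) wraps into
# the ctrl-secret passed to `nvme connect` later in this log.
secret = format_dhchap_key(
    "f37f037e47d79f492c3670f5b7514a9ce6d3cf598a896db26310016020bbb5a0", 3)
```

The CRC-32 suffix lets the consumer detect a truncated or mistyped secret before attempting authentication.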
00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:38.927 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.185 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:39.185 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:39.185 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1793269 /var/tmp/host.sock 00:17:39.185 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1793269 ']' 00:17:39.185 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:17:39.185 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:39.185 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:39.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
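The secrets built above reappear further down when the host connects (`nvme connect ... --dhchap-secret DHHC-1:...`). The encoding can be checked in reverse; a sketch under the same assumed payload layout (helper name illustrative), recovering the hex key and validating the trailing checksum:

```python
import base64
import struct
import zlib

def parse_dhchap_key(secret: str) -> str:
    """Illustrative inverse of the DHHC-1 wrapping: recover the
    ASCII hex key and verify the trailing little-endian CRC-32."""
    prefix, _digest, b64, _empty = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    data = base64.b64decode(b64)
    key, crc = data[:-4], data[-4:]
    if struct.pack("<I", zlib.crc32(key)) != crc:
        raise ValueError("DHHC-1 checksum mismatch")
    return key.decode()

# ckey0 from this log: the sha512 controller key generated above.
key = parse_dhchap_key(
    "DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1"
    "OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=:")
```

A secret with a corrupted payload fails the CRC check rather than producing a silently wrong key.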
00:17:39.185 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:39.185 13:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.443 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:39.443 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:39.443 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:39.443 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.443 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.703 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.704 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:39.704 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3t4 00:17:39.704 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.704 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.704 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.704 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.3t4 00:17:39.704 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.3t4 00:17:39.962 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.88M ]] 00:17:39.962 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.88M 00:17:39.962 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.962 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.962 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.962 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.88M 00:17:39.962 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.88M 00:17:40.219 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:40.219 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.556 00:17:40.219 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.219 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.219 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.219 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.556 00:17:40.219 13:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.556 00:17:40.477 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.p9j ]] 00:17:40.477 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.p9j 00:17:40.477 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.477 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.477 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.477 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.p9j 00:17:40.477 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.p9j 00:17:40.735 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:40.735 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Kc5 00:17:40.735 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.735 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.735 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.735 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.Kc5 00:17:40.735 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.Kc5 00:17:40.994 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.QtK ]] 00:17:40.994 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QtK 00:17:40.994 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.994 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.994 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.994 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QtK 00:17:40.994 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QtK 00:17:41.251 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:41.252 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xV2 00:17:41.252 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.252 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.252 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.252 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.xV2 00:17:41.252 13:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.xV2 00:17:41.509 13:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:41.509 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:41.509 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.509 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.509 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.509 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:41.766 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:41.767 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:41.767 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:41.767 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:41.767 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:41.767 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:41.767 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.767 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.767 13:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.767 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.767 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.767 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:41.767 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.025 00:17:42.025 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.025 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.025 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.284 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.284 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.284 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.284 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:42.284 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.284 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.284 { 00:17:42.284 "cntlid": 1, 00:17:42.284 "qid": 0, 00:17:42.284 "state": "enabled", 00:17:42.284 "thread": "nvmf_tgt_poll_group_000", 00:17:42.284 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:42.284 "listen_address": { 00:17:42.284 "trtype": "TCP", 00:17:42.284 "adrfam": "IPv4", 00:17:42.284 "traddr": "10.0.0.2", 00:17:42.284 "trsvcid": "4420" 00:17:42.284 }, 00:17:42.284 "peer_address": { 00:17:42.284 "trtype": "TCP", 00:17:42.284 "adrfam": "IPv4", 00:17:42.284 "traddr": "10.0.0.1", 00:17:42.284 "trsvcid": "57532" 00:17:42.284 }, 00:17:42.284 "auth": { 00:17:42.284 "state": "completed", 00:17:42.284 "digest": "sha256", 00:17:42.284 "dhgroup": "null" 00:17:42.284 } 00:17:42.284 } 00:17:42.284 ]' 00:17:42.542 13:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:42.542 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:42.542 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.542 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:42.542 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.542 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.542 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.542 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.800 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:17:42.800 13:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:17:43.733 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.733 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:43.733 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.733 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.733 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.733 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.733 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:17:43.733 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:43.991 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:43.991 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.991 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:43.991 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:43.991 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:43.991 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.991 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.991 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.991 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.991 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.991 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.991 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.991 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.249 00:17:44.249 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.249 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.249 13:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.507 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.507 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.507 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.507 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.507 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.507 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.507 { 00:17:44.507 "cntlid": 3, 00:17:44.507 "qid": 0, 00:17:44.507 "state": "enabled", 00:17:44.507 "thread": "nvmf_tgt_poll_group_000", 00:17:44.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:44.507 "listen_address": { 00:17:44.507 "trtype": "TCP", 00:17:44.507 "adrfam": "IPv4", 00:17:44.507 
"traddr": "10.0.0.2", 00:17:44.507 "trsvcid": "4420" 00:17:44.507 }, 00:17:44.507 "peer_address": { 00:17:44.507 "trtype": "TCP", 00:17:44.507 "adrfam": "IPv4", 00:17:44.507 "traddr": "10.0.0.1", 00:17:44.507 "trsvcid": "35248" 00:17:44.507 }, 00:17:44.507 "auth": { 00:17:44.507 "state": "completed", 00:17:44.507 "digest": "sha256", 00:17:44.507 "dhgroup": "null" 00:17:44.507 } 00:17:44.507 } 00:17:44.507 ]' 00:17:44.507 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.765 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.765 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.765 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:44.765 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.765 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.765 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.765 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.022 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:17:45.022 13:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 
--hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:17:45.959 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.959 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:45.959 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.959 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.959 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.959 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.959 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:45.959 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:46.217 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:46.217 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.217 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:46.217 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:17:46.217 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:46.217 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.217 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.217 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.217 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.217 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.217 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.217 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.217 13:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.475 00:17:46.475 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.475 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.475 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.732 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.733 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.733 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.733 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.733 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.733 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.733 { 00:17:46.733 "cntlid": 5, 00:17:46.733 "qid": 0, 00:17:46.733 "state": "enabled", 00:17:46.733 "thread": "nvmf_tgt_poll_group_000", 00:17:46.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:46.733 "listen_address": { 00:17:46.733 "trtype": "TCP", 00:17:46.733 "adrfam": "IPv4", 00:17:46.733 "traddr": "10.0.0.2", 00:17:46.733 "trsvcid": "4420" 00:17:46.733 }, 00:17:46.733 "peer_address": { 00:17:46.733 "trtype": "TCP", 00:17:46.733 "adrfam": "IPv4", 00:17:46.733 "traddr": "10.0.0.1", 00:17:46.733 "trsvcid": "35264" 00:17:46.733 }, 00:17:46.733 "auth": { 00:17:46.733 "state": "completed", 00:17:46.733 "digest": "sha256", 00:17:46.733 "dhgroup": "null" 00:17:46.733 } 00:17:46.733 } 00:17:46.733 ]' 00:17:46.733 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.733 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:46.733 13:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.733 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:46.733 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.991 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.991 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.991 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.250 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:17:47.250 13:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:17:48.186 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.186 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:48.186 
13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.186 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.186 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.186 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.186 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:48.186 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:48.445 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:48.445 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.445 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:48.445 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:48.445 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:48.445 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.445 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:17:48.445 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.445 13:28:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.445 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.445 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:48.445 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.445 13:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.703 00:17:48.703 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.703 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.703 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.961 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.961 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.961 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.961 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.961 13:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.961 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.961 { 00:17:48.961 "cntlid": 7, 00:17:48.961 "qid": 0, 00:17:48.961 "state": "enabled", 00:17:48.961 "thread": "nvmf_tgt_poll_group_000", 00:17:48.961 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:48.961 "listen_address": { 00:17:48.961 "trtype": "TCP", 00:17:48.961 "adrfam": "IPv4", 00:17:48.961 "traddr": "10.0.0.2", 00:17:48.961 "trsvcid": "4420" 00:17:48.961 }, 00:17:48.961 "peer_address": { 00:17:48.961 "trtype": "TCP", 00:17:48.961 "adrfam": "IPv4", 00:17:48.961 "traddr": "10.0.0.1", 00:17:48.961 "trsvcid": "35294" 00:17:48.961 }, 00:17:48.961 "auth": { 00:17:48.961 "state": "completed", 00:17:48.961 "digest": "sha256", 00:17:48.961 "dhgroup": "null" 00:17:48.961 } 00:17:48.961 } 00:17:48.961 ]' 00:17:48.961 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.961 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:48.961 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.961 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:48.961 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.961 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.961 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.961 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:49.530 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:17:49.530 13:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:17:50.095 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.095 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:50.095 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.095 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.355 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.355 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:50.355 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.355 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:50.355 13:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:50.613 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:50.613 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.613 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:50.613 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:50.613 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:50.613 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.613 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.613 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.613 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.613 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.613 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.613 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.613 13:28:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:50.871 00:17:50.871 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.871 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.871 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.129 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.129 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.129 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.129 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.129 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.129 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.129 { 00:17:51.129 "cntlid": 9, 00:17:51.129 "qid": 0, 00:17:51.129 "state": "enabled", 00:17:51.129 "thread": "nvmf_tgt_poll_group_000", 00:17:51.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:51.129 "listen_address": { 00:17:51.129 "trtype": "TCP", 00:17:51.129 "adrfam": "IPv4", 00:17:51.129 "traddr": "10.0.0.2", 00:17:51.129 "trsvcid": "4420" 00:17:51.129 }, 00:17:51.129 "peer_address": { 
00:17:51.129 "trtype": "TCP", 00:17:51.129 "adrfam": "IPv4", 00:17:51.129 "traddr": "10.0.0.1", 00:17:51.129 "trsvcid": "35318" 00:17:51.129 }, 00:17:51.129 "auth": { 00:17:51.129 "state": "completed", 00:17:51.129 "digest": "sha256", 00:17:51.129 "dhgroup": "ffdhe2048" 00:17:51.129 } 00:17:51.129 } 00:17:51.129 ]' 00:17:51.129 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.129 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:51.129 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.129 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:51.129 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.129 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.129 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.129 13:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.387 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:17:51.387 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 
21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:17:52.323 13:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.323 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:52.323 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.323 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.323 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.323 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.323 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:52.323 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:52.892 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:52.892 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.892 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:52.892 13:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:52.892 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:52.892 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.892 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.892 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.892 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.892 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.892 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.892 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:52.892 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.150 00:17:53.150 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:53.150 13:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:53.150 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.408 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.408 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.409 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.409 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.409 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.409 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.409 { 00:17:53.409 "cntlid": 11, 00:17:53.409 "qid": 0, 00:17:53.409 "state": "enabled", 00:17:53.409 "thread": "nvmf_tgt_poll_group_000", 00:17:53.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:53.409 "listen_address": { 00:17:53.409 "trtype": "TCP", 00:17:53.409 "adrfam": "IPv4", 00:17:53.409 "traddr": "10.0.0.2", 00:17:53.409 "trsvcid": "4420" 00:17:53.409 }, 00:17:53.409 "peer_address": { 00:17:53.409 "trtype": "TCP", 00:17:53.409 "adrfam": "IPv4", 00:17:53.409 "traddr": "10.0.0.1", 00:17:53.409 "trsvcid": "35356" 00:17:53.409 }, 00:17:53.409 "auth": { 00:17:53.409 "state": "completed", 00:17:53.409 "digest": "sha256", 00:17:53.409 "dhgroup": "ffdhe2048" 00:17:53.409 } 00:17:53.409 } 00:17:53.409 ]' 00:17:53.409 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.409 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:17:53.409 13:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.409 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:53.409 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.409 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.409 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.409 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.669 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:17:53.669 13:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:17:54.605 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.605 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:54.605 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.605 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.605 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.605 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.605 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:54.605 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:54.864 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:54.864 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.864 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:54.864 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:54.864 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:54.864 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.864 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.864 13:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.864 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.864 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.864 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.864 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:54.864 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.431 00:17:55.431 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.431 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.431 13:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.689 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.689 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.689 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.689 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.689 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.689 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.689 { 00:17:55.689 "cntlid": 13, 00:17:55.689 "qid": 0, 00:17:55.689 "state": "enabled", 00:17:55.689 "thread": "nvmf_tgt_poll_group_000", 00:17:55.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:55.689 "listen_address": { 00:17:55.689 "trtype": "TCP", 00:17:55.689 "adrfam": "IPv4", 00:17:55.689 "traddr": "10.0.0.2", 00:17:55.689 "trsvcid": "4420" 00:17:55.689 }, 00:17:55.689 "peer_address": { 00:17:55.689 "trtype": "TCP", 00:17:55.689 "adrfam": "IPv4", 00:17:55.689 "traddr": "10.0.0.1", 00:17:55.689 "trsvcid": "45142" 00:17:55.689 }, 00:17:55.689 "auth": { 00:17:55.689 "state": "completed", 00:17:55.689 "digest": "sha256", 00:17:55.689 "dhgroup": "ffdhe2048" 00:17:55.689 } 00:17:55.689 } 00:17:55.689 ]' 00:17:55.689 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.689 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.689 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.689 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:55.689 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.689 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.689 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:55.689 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.947 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:17:55.947 13:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:17:56.884 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.884 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:56.884 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.884 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.884 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.884 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.884 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:56.884 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:57.142 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:57.142 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.142 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:57.142 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:57.142 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:57.142 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.142 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:17:57.142 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.142 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.142 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.142 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:57.142 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.142 13:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.399 00:17:57.399 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.399 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.399 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.966 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.966 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.966 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.966 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.966 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.966 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.966 { 00:17:57.966 "cntlid": 15, 00:17:57.966 "qid": 0, 00:17:57.966 "state": "enabled", 00:17:57.966 "thread": "nvmf_tgt_poll_group_000", 00:17:57.966 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:17:57.966 "listen_address": { 00:17:57.966 "trtype": "TCP", 00:17:57.966 "adrfam": "IPv4", 00:17:57.966 "traddr": "10.0.0.2", 00:17:57.966 "trsvcid": 
"4420" 00:17:57.966 }, 00:17:57.966 "peer_address": { 00:17:57.966 "trtype": "TCP", 00:17:57.966 "adrfam": "IPv4", 00:17:57.966 "traddr": "10.0.0.1", 00:17:57.966 "trsvcid": "45164" 00:17:57.966 }, 00:17:57.966 "auth": { 00:17:57.966 "state": "completed", 00:17:57.966 "digest": "sha256", 00:17:57.966 "dhgroup": "ffdhe2048" 00:17:57.966 } 00:17:57.966 } 00:17:57.966 ]' 00:17:57.966 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.966 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.966 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.966 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.967 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.967 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.967 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.967 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.226 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:17:58.226 13:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret 
DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:17:59.165 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.165 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.165 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:17:59.165 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.165 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.165 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.165 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:59.165 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.165 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:59.165 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:59.468 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:59.468 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.468 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:59.468 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:59.468 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:59.468 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.468 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.468 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.468 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.468 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.468 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.468 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.468 13:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.752 00:17:59.752 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.752 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.752 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.009 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.009 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.009 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.009 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.009 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.009 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:00.009 { 00:18:00.009 "cntlid": 17, 00:18:00.009 "qid": 0, 00:18:00.009 "state": "enabled", 00:18:00.009 "thread": "nvmf_tgt_poll_group_000", 00:18:00.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:00.010 "listen_address": { 00:18:00.010 "trtype": "TCP", 00:18:00.010 "adrfam": "IPv4", 00:18:00.010 "traddr": "10.0.0.2", 00:18:00.010 "trsvcid": "4420" 00:18:00.010 }, 00:18:00.010 "peer_address": { 00:18:00.010 "trtype": "TCP", 00:18:00.010 "adrfam": "IPv4", 00:18:00.010 "traddr": "10.0.0.1", 00:18:00.010 "trsvcid": "45174" 00:18:00.010 }, 00:18:00.010 "auth": { 00:18:00.010 "state": "completed", 00:18:00.010 "digest": "sha256", 00:18:00.010 "dhgroup": "ffdhe3072" 00:18:00.010 } 00:18:00.010 } 00:18:00.010 ]' 00:18:00.010 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:00.010 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:00.010 13:28:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:00.010 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:00.010 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:00.010 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.010 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.010 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.268 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:18:00.268 13:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:18:01.203 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.203 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:01.203 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.203 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.203 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.203 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.203 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:01.203 13:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:01.463 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:01.463 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.463 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:01.463 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:01.463 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:01.463 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.463 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.463 13:28:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.463 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.463 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.463 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.463 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.463 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.031 00:18:02.031 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.031 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.031 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.289 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.289 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.289 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.289 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.289 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.289 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.289 { 00:18:02.289 "cntlid": 19, 00:18:02.289 "qid": 0, 00:18:02.289 "state": "enabled", 00:18:02.289 "thread": "nvmf_tgt_poll_group_000", 00:18:02.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:02.289 "listen_address": { 00:18:02.289 "trtype": "TCP", 00:18:02.289 "adrfam": "IPv4", 00:18:02.289 "traddr": "10.0.0.2", 00:18:02.289 "trsvcid": "4420" 00:18:02.289 }, 00:18:02.289 "peer_address": { 00:18:02.289 "trtype": "TCP", 00:18:02.289 "adrfam": "IPv4", 00:18:02.289 "traddr": "10.0.0.1", 00:18:02.289 "trsvcid": "45216" 00:18:02.289 }, 00:18:02.289 "auth": { 00:18:02.289 "state": "completed", 00:18:02.289 "digest": "sha256", 00:18:02.289 "dhgroup": "ffdhe3072" 00:18:02.289 } 00:18:02.289 } 00:18:02.289 ]' 00:18:02.289 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.289 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.289 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.289 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.289 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.289 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.289 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:02.289 13:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.546 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:18:02.547 13:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:18:03.482 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.482 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:03.482 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.482 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.482 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.482 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.482 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.482 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:03.740 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:03.740 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.740 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:03.740 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:03.740 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:03.740 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.740 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.740 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.740 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.740 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.740 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.740 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:03.740 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.309 00:18:04.309 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.309 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.309 13:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.309 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.309 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.309 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.309 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.568 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.568 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.568 { 00:18:04.568 "cntlid": 21, 00:18:04.568 "qid": 0, 00:18:04.568 "state": "enabled", 00:18:04.568 "thread": "nvmf_tgt_poll_group_000", 00:18:04.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:04.568 "listen_address": { 
00:18:04.568 "trtype": "TCP", 00:18:04.568 "adrfam": "IPv4", 00:18:04.568 "traddr": "10.0.0.2", 00:18:04.568 "trsvcid": "4420" 00:18:04.568 }, 00:18:04.568 "peer_address": { 00:18:04.568 "trtype": "TCP", 00:18:04.568 "adrfam": "IPv4", 00:18:04.568 "traddr": "10.0.0.1", 00:18:04.568 "trsvcid": "45978" 00:18:04.568 }, 00:18:04.568 "auth": { 00:18:04.568 "state": "completed", 00:18:04.568 "digest": "sha256", 00:18:04.568 "dhgroup": "ffdhe3072" 00:18:04.568 } 00:18:04.568 } 00:18:04.568 ]' 00:18:04.568 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.568 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.568 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.568 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.568 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.568 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.568 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.568 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.826 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:18:04.826 13:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:18:05.764 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.764 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:05.764 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.764 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.764 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.764 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.764 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:05.764 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:06.022 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:06.022 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.022 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:18:06.022 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:06.022 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:06.022 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.022 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:18:06.022 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.022 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.022 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.022 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:06.022 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.022 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.281 00:18:06.281 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.281 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:18:06.281 13:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.539 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.539 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.539 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.539 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.539 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.539 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.539 { 00:18:06.539 "cntlid": 23, 00:18:06.539 "qid": 0, 00:18:06.539 "state": "enabled", 00:18:06.539 "thread": "nvmf_tgt_poll_group_000", 00:18:06.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:06.539 "listen_address": { 00:18:06.539 "trtype": "TCP", 00:18:06.539 "adrfam": "IPv4", 00:18:06.539 "traddr": "10.0.0.2", 00:18:06.539 "trsvcid": "4420" 00:18:06.539 }, 00:18:06.539 "peer_address": { 00:18:06.539 "trtype": "TCP", 00:18:06.539 "adrfam": "IPv4", 00:18:06.539 "traddr": "10.0.0.1", 00:18:06.539 "trsvcid": "45992" 00:18:06.539 }, 00:18:06.539 "auth": { 00:18:06.539 "state": "completed", 00:18:06.539 "digest": "sha256", 00:18:06.539 "dhgroup": "ffdhe3072" 00:18:06.539 } 00:18:06.539 } 00:18:06.539 ]' 00:18:06.539 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.796 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.796 13:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.796 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:06.796 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.796 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.796 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.796 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.054 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:18:07.054 13:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:18:07.988 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.988 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:07.988 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:07.988 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.988 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.988 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.988 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.988 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:07.988 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:08.247 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:08.247 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.247 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:08.247 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:08.247 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:08.247 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.247 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.247 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:08.247 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.247 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.247 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.247 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.247 13:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.504 00:18:08.504 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.504 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.504 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.763 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.763 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.763 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.763 13:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.763 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.763 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.763 { 00:18:08.763 "cntlid": 25, 00:18:08.763 "qid": 0, 00:18:08.763 "state": "enabled", 00:18:08.763 "thread": "nvmf_tgt_poll_group_000", 00:18:08.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:08.763 "listen_address": { 00:18:08.779 "trtype": "TCP", 00:18:08.779 "adrfam": "IPv4", 00:18:08.779 "traddr": "10.0.0.2", 00:18:08.779 "trsvcid": "4420" 00:18:08.779 }, 00:18:08.779 "peer_address": { 00:18:08.779 "trtype": "TCP", 00:18:08.779 "adrfam": "IPv4", 00:18:08.779 "traddr": "10.0.0.1", 00:18:08.779 "trsvcid": "46038" 00:18:08.779 }, 00:18:08.779 "auth": { 00:18:08.779 "state": "completed", 00:18:08.779 "digest": "sha256", 00:18:08.779 "dhgroup": "ffdhe4096" 00:18:08.779 } 00:18:08.779 } 00:18:08.779 ]' 00:18:08.779 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.038 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.038 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.038 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.038 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.038 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.038 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.038 13:28:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.296 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:18:09.296 13:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:18:10.232 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.232 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:10.232 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.232 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.232 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.232 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.232 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:10.232 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:10.490 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:10.490 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.490 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:10.490 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:10.490 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:10.490 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.490 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.490 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.490 13:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.490 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.490 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.490 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.490 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:10.748 00:18:10.748 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.748 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.748 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.006 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.006 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.006 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.006 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.006 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.006 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.006 { 00:18:11.006 "cntlid": 27, 00:18:11.006 "qid": 0, 00:18:11.006 "state": "enabled", 00:18:11.006 "thread": "nvmf_tgt_poll_group_000", 00:18:11.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:11.006 
"listen_address": { 00:18:11.006 "trtype": "TCP", 00:18:11.006 "adrfam": "IPv4", 00:18:11.006 "traddr": "10.0.0.2", 00:18:11.006 "trsvcid": "4420" 00:18:11.006 }, 00:18:11.006 "peer_address": { 00:18:11.006 "trtype": "TCP", 00:18:11.006 "adrfam": "IPv4", 00:18:11.006 "traddr": "10.0.0.1", 00:18:11.006 "trsvcid": "46072" 00:18:11.006 }, 00:18:11.006 "auth": { 00:18:11.006 "state": "completed", 00:18:11.006 "digest": "sha256", 00:18:11.006 "dhgroup": "ffdhe4096" 00:18:11.006 } 00:18:11.006 } 00:18:11.006 ]' 00:18:11.006 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.284 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.284 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.284 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:11.284 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.284 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.284 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.284 13:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.542 13:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:18:11.542 13:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:18:12.479 13:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.479 13:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:12.479 13:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.479 13:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.479 13:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.479 13:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.479 13:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:12.479 13:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:12.736 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:12.736 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.736 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:18:12.737 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:12.737 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:12.737 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.737 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.737 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.737 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.737 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.737 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.737 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:12.737 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.000 00:18:13.000 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:13.000 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.000 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.259 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.259 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.259 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.259 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.259 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.259 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.259 { 00:18:13.259 "cntlid": 29, 00:18:13.259 "qid": 0, 00:18:13.259 "state": "enabled", 00:18:13.259 "thread": "nvmf_tgt_poll_group_000", 00:18:13.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:13.259 "listen_address": { 00:18:13.259 "trtype": "TCP", 00:18:13.259 "adrfam": "IPv4", 00:18:13.259 "traddr": "10.0.0.2", 00:18:13.259 "trsvcid": "4420" 00:18:13.259 }, 00:18:13.259 "peer_address": { 00:18:13.259 "trtype": "TCP", 00:18:13.259 "adrfam": "IPv4", 00:18:13.259 "traddr": "10.0.0.1", 00:18:13.259 "trsvcid": "46108" 00:18:13.259 }, 00:18:13.259 "auth": { 00:18:13.259 "state": "completed", 00:18:13.259 "digest": "sha256", 00:18:13.259 "dhgroup": "ffdhe4096" 00:18:13.259 } 00:18:13.259 } 00:18:13.259 ]' 00:18:13.259 13:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.517 13:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.517 13:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.517 13:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:13.517 13:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.517 13:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.517 13:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.517 13:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.775 13:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:18:13.775 13:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:18:14.712 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.712 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:18:14.712 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:14.712 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:14.712 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:14.712 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:14.712 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:14.712 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:18:14.971 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3
00:18:14.971 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:14.971 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:14.971 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:18:14.971 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:14.971 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:14.971 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3
00:18:14.971 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:14.971 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:14.971 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:14.971 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:14.971 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:14.971 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:15.229
00:18:15.489 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:15.489 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:15.489 13:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:15.747 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:15.747 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:15.747 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:15.747 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:15.747 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:15.747 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:15.747 {
00:18:15.747 "cntlid": 31,
00:18:15.747 "qid": 0,
00:18:15.747 "state": "enabled",
00:18:15.747 "thread": "nvmf_tgt_poll_group_000",
00:18:15.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4",
00:18:15.747 "listen_address": {
00:18:15.747 "trtype": "TCP",
00:18:15.747 "adrfam": "IPv4",
00:18:15.747 "traddr": "10.0.0.2",
00:18:15.747 "trsvcid": "4420"
00:18:15.747 },
00:18:15.747 "peer_address": {
00:18:15.747 "trtype": "TCP",
00:18:15.747 "adrfam": "IPv4",
00:18:15.747 "traddr": "10.0.0.1",
00:18:15.747 "trsvcid": "35428"
00:18:15.747 },
00:18:15.747 "auth": {
00:18:15.747 "state": "completed",
00:18:15.747 "digest": "sha256",
00:18:15.747 "dhgroup": "ffdhe4096"
00:18:15.747 }
00:18:15.747 }
00:18:15.747 ]'
00:18:15.747 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:15.747 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:15.747 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:15.747 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:18:15.747 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:15.747 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:15.747 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:15.747 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:16.004 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=:
00:18:16.004 13:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=:
00:18:16.939 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:17.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:17.197 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:18:17.197 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:17.197 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:17.197 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:17.197 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:17.197 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:17.197 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:17.197 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:17.456 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0
00:18:17.456 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:17.456 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:17.456 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:17.456 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:17.456 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:17.456 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:17.456 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:17.456 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:17.456 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:17.456 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:17.456 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:17.456 13:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:18.024
00:18:18.024 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:18.024 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:18.024 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:18.282 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:18.282 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:18.282 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:18.282 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:18.282 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:18.282 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:18.282 {
00:18:18.282 "cntlid": 33,
00:18:18.282 "qid": 0,
00:18:18.282 "state": "enabled",
00:18:18.282 "thread": "nvmf_tgt_poll_group_000",
00:18:18.282 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4",
00:18:18.282 "listen_address": {
00:18:18.282 "trtype": "TCP",
00:18:18.282 "adrfam": "IPv4",
00:18:18.282 "traddr": "10.0.0.2",
00:18:18.282 "trsvcid": "4420"
00:18:18.282 },
00:18:18.282 "peer_address": {
00:18:18.282 "trtype": "TCP",
00:18:18.282 "adrfam": "IPv4",
00:18:18.282 "traddr": "10.0.0.1",
00:18:18.282 "trsvcid": "35456"
00:18:18.282 },
00:18:18.282 "auth": {
00:18:18.282 "state": "completed",
00:18:18.282 "digest": "sha256",
00:18:18.282 "dhgroup": "ffdhe6144"
00:18:18.282 }
00:18:18.282 }
00:18:18.282 ]'
00:18:18.282 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:18.282 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:18.282 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:18.282 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:18.282 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:18.282 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:18.282 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:18.282 13:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:18.540 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=:
00:18:18.540 13:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=:
00:18:19.479 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:19.479 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:19.479 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:18:19.479 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.479 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:19.479 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.479 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:19.479 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:19.479 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:19.737 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1
00:18:19.737 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:19.737 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:19.737 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:19.737 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:18:19.737 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:19.737 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:19.737 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:19.737 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:19.737 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:19.737 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:19.737 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:19.737 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:18:20.304
00:18:20.304 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:20.304 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:20.304 13:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:20.562 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:20.562 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:20.562 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:20.562 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:20.562 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:20.562 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:20.562 {
00:18:20.562 "cntlid": 35,
00:18:20.562 "qid": 0,
00:18:20.562 "state": "enabled",
00:18:20.562 "thread": "nvmf_tgt_poll_group_000",
00:18:20.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4",
00:18:20.562 "listen_address": {
00:18:20.562 "trtype": "TCP",
00:18:20.562 "adrfam": "IPv4",
00:18:20.562 "traddr": "10.0.0.2",
00:18:20.562 "trsvcid": "4420"
00:18:20.562 },
00:18:20.562 "peer_address": {
00:18:20.562 "trtype": "TCP",
00:18:20.562 "adrfam": "IPv4",
00:18:20.562 "traddr": "10.0.0.1",
00:18:20.562 "trsvcid": "35472"
00:18:20.562 },
00:18:20.562 "auth": {
00:18:20.562 "state": "completed",
00:18:20.562 "digest": "sha256",
00:18:20.562 "dhgroup": "ffdhe6144"
00:18:20.562 }
00:18:20.562 }
00:18:20.562 ]'
00:18:20.562 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:20.819 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:20.819 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:20.820 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:20.820 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:20.820 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:20.820 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:20.820 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:21.077 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==:
00:18:21.078 13:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==:
00:18:22.014 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:22.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:22.014 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:18:22.014 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.014 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.014 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.014 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:22.014 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:22.014 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:22.272 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2
00:18:22.272 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:22.272 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:22.272 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:22.272 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:18:22.272 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:22.272 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:22.272 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:22.272 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:22.272 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:22.272 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:22.272 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:22.272 13:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:18:22.838
00:18:23.097 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:23.097 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:23.097 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:23.355 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:23.355 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:23.355 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:23.355 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:23.355 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:23.355 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:23.355 {
00:18:23.355 "cntlid": 37,
00:18:23.355 "qid": 0,
00:18:23.355 "state": "enabled",
00:18:23.355 "thread": "nvmf_tgt_poll_group_000",
00:18:23.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4",
00:18:23.355 "listen_address": {
00:18:23.355 "trtype": "TCP",
00:18:23.355 "adrfam": "IPv4",
00:18:23.355 "traddr": "10.0.0.2",
00:18:23.355 "trsvcid": "4420"
00:18:23.355 },
00:18:23.355 "peer_address": {
00:18:23.355 "trtype": "TCP",
00:18:23.355 "adrfam": "IPv4",
00:18:23.355 "traddr": "10.0.0.1",
00:18:23.355 "trsvcid": "35502"
00:18:23.355 },
00:18:23.355 "auth": {
00:18:23.355 "state": "completed",
00:18:23.355 "digest": "sha256",
00:18:23.355 "dhgroup": "ffdhe6144"
00:18:23.355 }
00:18:23.355 }
00:18:23.355 ]'
00:18:23.355 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:23.355 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:23.355 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:23.355 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:23.355 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:23.355 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:23.355 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:23.355 13:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:23.615 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6:
00:18:23.615 13:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6:
00:18:24.551 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:24.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:24.551 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:18:24.551 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:24.551 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:24.551 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:24.551 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:24.551 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:24.551 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:18:24.808 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3
00:18:24.808 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:24.808 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:24.809 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:18:24.809 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:18:24.809 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:24.809 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3
00:18:24.809 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:24.809 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:24.809 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:24.809 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:18:24.809 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:24.809 13:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:18:25.376
00:18:25.376 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:25.376 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:25.376 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:25.635 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:25.635 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:25.635 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:25.635 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:25.635 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:25.635 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:25.635 {
00:18:25.635 "cntlid": 39,
00:18:25.635 "qid": 0,
00:18:25.635 "state": "enabled",
00:18:25.635 "thread": "nvmf_tgt_poll_group_000",
00:18:25.635 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4",
00:18:25.635 "listen_address": {
00:18:25.635 "trtype": "TCP",
00:18:25.635 "adrfam": "IPv4",
00:18:25.635 "traddr": "10.0.0.2",
00:18:25.635 "trsvcid": "4420"
00:18:25.635 },
00:18:25.635 "peer_address": {
00:18:25.635 "trtype": "TCP",
00:18:25.635 "adrfam": "IPv4",
00:18:25.635 "traddr": "10.0.0.1",
00:18:25.635 "trsvcid": "35864"
00:18:25.635 },
00:18:25.635 "auth": {
00:18:25.635 "state": "completed",
00:18:25.635 "digest": "sha256",
00:18:25.635 "dhgroup": "ffdhe6144"
00:18:25.635 }
00:18:25.635 }
00:18:25.635 ]'
00:18:25.635 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:25.893 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:18:25.893 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:18:25.893 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:18:25.893 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:18:25.893 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:18:25.893 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:25.893 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:26.151 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=:
00:18:26.151 13:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=:
00:18:27.086 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:27.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:27.086 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4
00:18:27.086 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:27.086 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:27.086 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:27.086 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:18:27.086 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:18:27.086 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:27.086 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:18:27.344 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0
00:18:27.344 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:18:27.344 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:18:27.344 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:18:27.344 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:18:27.344 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:27.344 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:27.344 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:27.344 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:27.344 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:27.344 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:27.344 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:27.344 13:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:28.279
00:18:28.279 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:18:28.279 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:18:28.279 13:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:28.537 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:28.537 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:18:28.537 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:28.537 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:28.537 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:28.537 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:18:28.537 {
00:18:28.537 "cntlid": 41,
00:18:28.537 "qid": 0,
00:18:28.537 "state": "enabled",
00:18:28.537 "thread": "nvmf_tgt_poll_group_000",
00:18:28.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4",
00:18:28.537 "listen_address": {
00:18:28.537 "trtype": "TCP",
00:18:28.537 "adrfam": "IPv4",
00:18:28.537 "traddr": "10.0.0.2",
00:18:28.537 "trsvcid": "4420"
00:18:28.537 },
00:18:28.537 "peer_address": {
00:18:28.537 "trtype": "TCP",
00:18:28.537 "adrfam": "IPv4",
00:18:28.537 "traddr": "10.0.0.1",
00:18:28.537 "trsvcid": "35890"
00:18:28.537 },
00:18:28.537 "auth": {
00:18:28.537 "state": "completed",
00:18:28.537 "digest": "sha256",
00:18:28.537 "dhgroup": "ffdhe8192"
00:18:28.537 }
00:18:28.537 }
00:18:28.537 ]'
00:18:28.537 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:18:28.537 13:29:10
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.537 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.537 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:28.537 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.537 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.537 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.537 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.795 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:18:28.795 13:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:18:29.752 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
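The `--dhchap-secret` strings passed to `nvme connect` above use the in-band authentication secret representation (`DHHC-1:<tt>:<base64>:`). As I understand that format, the two-digit field after `DHHC-1` names the key transformation hash (`00` none, `01` SHA-256, `02` SHA-384, `03` SHA-512) and the base64 blob is the raw key followed by a little-endian CRC-32 of the key. A minimal parser sketch under those assumptions (`parse_dhchap_secret` is a hypothetical helper, not part of the test scripts):

```python
import base64
import zlib


def parse_dhchap_secret(secret: str):
    """Split a DHHC-1 secret into (transform_id, key_bytes, crc_ok).

    Assumed layout: 'DHHC-1:<tt>:<base64>:' where the base64 payload is
    the key material with a 4-byte little-endian CRC-32 trailer.
    """
    prefix, transform, b64, trailer = secret.split(":")
    if prefix != "DHHC-1" or trailer != "":
        raise ValueError("not a DHHC-1 secret")
    blob = base64.b64decode(b64)
    key, crc = blob[:-4], blob[-4:]
    # Verify the CRC-32 trailer against the decoded key material.
    crc_ok = zlib.crc32(key).to_bytes(4, "little") == crc
    return transform, key, crc_ok
```

Applied to one of the secrets in the log, this would report transform `03` (SHA-512) and a key length matching one of the sizes the spec allows (32, 48, or 64 bytes).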
00:18:29.752 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:29.752 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.752 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.753 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.753 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.753 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:29.753 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:30.028 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:30.028 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.028 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:30.028 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:30.028 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.028 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.028 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.028 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.028 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.028 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.029 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.029 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.029 13:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.966 00:18:30.966 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.966 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.966 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.223 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.223 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.223 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.223 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.223 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.223 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.223 { 00:18:31.224 "cntlid": 43, 00:18:31.224 "qid": 0, 00:18:31.224 "state": "enabled", 00:18:31.224 "thread": "nvmf_tgt_poll_group_000", 00:18:31.224 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:31.224 "listen_address": { 00:18:31.224 "trtype": "TCP", 00:18:31.224 "adrfam": "IPv4", 00:18:31.224 "traddr": "10.0.0.2", 00:18:31.224 "trsvcid": "4420" 00:18:31.224 }, 00:18:31.224 "peer_address": { 00:18:31.224 "trtype": "TCP", 00:18:31.224 "adrfam": "IPv4", 00:18:31.224 "traddr": "10.0.0.1", 00:18:31.224 "trsvcid": "35918" 00:18:31.224 }, 00:18:31.224 "auth": { 00:18:31.224 "state": "completed", 00:18:31.224 "digest": "sha256", 00:18:31.224 "dhgroup": "ffdhe8192" 00:18:31.224 } 00:18:31.224 } 00:18:31.224 ]' 00:18:31.224 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.224 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.224 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.224 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.224 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.224 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:18:31.224 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.224 13:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.791 13:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:18:31.791 13:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:18:32.375 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.633 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:32.633 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.633 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.633 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.633 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:18:32.633 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:32.633 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:32.889 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:32.889 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.889 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:32.889 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:32.889 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:32.889 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.889 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.889 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.889 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.889 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.889 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.889 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.889 13:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.826 00:18:33.826 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.826 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.826 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.083 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.083 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.083 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.083 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.083 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.083 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.083 { 00:18:34.083 "cntlid": 45, 00:18:34.083 "qid": 0, 00:18:34.083 "state": "enabled", 00:18:34.083 "thread": "nvmf_tgt_poll_group_000", 00:18:34.083 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:34.083 "listen_address": { 00:18:34.083 "trtype": "TCP", 00:18:34.083 "adrfam": "IPv4", 00:18:34.083 "traddr": "10.0.0.2", 00:18:34.084 "trsvcid": "4420" 00:18:34.084 }, 00:18:34.084 "peer_address": { 00:18:34.084 "trtype": "TCP", 00:18:34.084 "adrfam": "IPv4", 00:18:34.084 "traddr": "10.0.0.1", 00:18:34.084 "trsvcid": "35944" 00:18:34.084 }, 00:18:34.084 "auth": { 00:18:34.084 "state": "completed", 00:18:34.084 "digest": "sha256", 00:18:34.084 "dhgroup": "ffdhe8192" 00:18:34.084 } 00:18:34.084 } 00:18:34.084 ]' 00:18:34.084 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.084 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.084 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.084 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:34.084 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.084 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.084 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.084 13:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.649 13:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:18:34.649 13:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:18:35.582 13:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.582 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:35.582 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.582 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.582 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.582 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.582 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:35.582 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:35.856 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:35.857 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:18:35.857 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:35.857 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:35.857 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:35.857 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.857 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:18:35.857 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.857 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.857 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.857 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:35.857 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.857 13:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.791 00:18:36.791 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:18:36.791 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.791 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.791 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.791 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.791 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.791 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.791 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.791 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.791 { 00:18:36.791 "cntlid": 47, 00:18:36.791 "qid": 0, 00:18:36.791 "state": "enabled", 00:18:36.791 "thread": "nvmf_tgt_poll_group_000", 00:18:36.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:36.791 "listen_address": { 00:18:36.791 "trtype": "TCP", 00:18:36.791 "adrfam": "IPv4", 00:18:36.791 "traddr": "10.0.0.2", 00:18:36.791 "trsvcid": "4420" 00:18:36.791 }, 00:18:36.791 "peer_address": { 00:18:36.791 "trtype": "TCP", 00:18:36.791 "adrfam": "IPv4", 00:18:36.791 "traddr": "10.0.0.1", 00:18:36.791 "trsvcid": "44558" 00:18:36.791 }, 00:18:36.791 "auth": { 00:18:36.791 "state": "completed", 00:18:36.791 "digest": "sha256", 00:18:36.791 "dhgroup": "ffdhe8192" 00:18:36.791 } 00:18:36.792 } 00:18:36.792 ]' 00:18:36.792 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.792 13:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.792 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.050 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:37.050 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.050 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.050 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.050 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.307 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:18:37.307 13:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:18:38.242 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.242 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:38.242 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.242 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.242 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.242 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:38.242 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.242 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.242 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:38.242 13:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:38.500 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:38.500 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.500 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:38.500 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:38.500 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:38.500 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.500 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.500 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.500 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.500 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.500 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.500 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.500 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.756 00:18:38.756 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.756 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.756 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.013 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.013 13:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.013 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.013 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.013 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.013 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.013 { 00:18:39.013 "cntlid": 49, 00:18:39.013 "qid": 0, 00:18:39.013 "state": "enabled", 00:18:39.013 "thread": "nvmf_tgt_poll_group_000", 00:18:39.013 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:39.013 "listen_address": { 00:18:39.013 "trtype": "TCP", 00:18:39.013 "adrfam": "IPv4", 00:18:39.013 "traddr": "10.0.0.2", 00:18:39.013 "trsvcid": "4420" 00:18:39.013 }, 00:18:39.013 "peer_address": { 00:18:39.013 "trtype": "TCP", 00:18:39.013 "adrfam": "IPv4", 00:18:39.013 "traddr": "10.0.0.1", 00:18:39.013 "trsvcid": "44588" 00:18:39.013 }, 00:18:39.013 "auth": { 00:18:39.013 "state": "completed", 00:18:39.013 "digest": "sha384", 00:18:39.013 "dhgroup": "null" 00:18:39.013 } 00:18:39.013 } 00:18:39.013 ]' 00:18:39.013 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.013 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:39.013 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.271 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:39.271 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.271 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.271 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.271 13:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.529 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:18:39.529 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:18:40.461 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.461 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:40.461 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.461 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.461 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.462 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.462 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:40.462 13:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:40.719 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:40.719 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.719 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:40.719 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:40.719 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:40.719 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.719 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.719 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.719 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.719 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.719 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.719 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.719 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.978 00:18:40.978 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.978 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.978 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:41.236 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.236 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.236 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.236 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.236 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.236 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.236 { 00:18:41.236 "cntlid": 51, 
00:18:41.236 "qid": 0, 00:18:41.236 "state": "enabled", 00:18:41.236 "thread": "nvmf_tgt_poll_group_000", 00:18:41.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:41.236 "listen_address": { 00:18:41.236 "trtype": "TCP", 00:18:41.236 "adrfam": "IPv4", 00:18:41.236 "traddr": "10.0.0.2", 00:18:41.236 "trsvcid": "4420" 00:18:41.236 }, 00:18:41.236 "peer_address": { 00:18:41.236 "trtype": "TCP", 00:18:41.236 "adrfam": "IPv4", 00:18:41.236 "traddr": "10.0.0.1", 00:18:41.236 "trsvcid": "44618" 00:18:41.236 }, 00:18:41.236 "auth": { 00:18:41.236 "state": "completed", 00:18:41.236 "digest": "sha384", 00:18:41.236 "dhgroup": "null" 00:18:41.236 } 00:18:41.236 } 00:18:41.236 ]' 00:18:41.236 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.236 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:41.236 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.236 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:41.236 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.236 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.236 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.236 13:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.802 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret 
DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:18:41.802 13:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:18:42.739 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.739 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:42.739 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.739 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.739 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.739 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:42.739 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:42.739 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:42.996 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:18:42.997 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.997 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:42.997 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:42.997 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:42.997 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.997 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.997 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.997 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.997 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.997 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.997 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.997 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.254 00:18:43.254 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.254 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.254 13:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.511 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.511 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.511 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.511 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.511 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.511 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.511 { 00:18:43.511 "cntlid": 53, 00:18:43.511 "qid": 0, 00:18:43.511 "state": "enabled", 00:18:43.511 "thread": "nvmf_tgt_poll_group_000", 00:18:43.511 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:43.511 "listen_address": { 00:18:43.511 "trtype": "TCP", 00:18:43.511 "adrfam": "IPv4", 00:18:43.511 "traddr": "10.0.0.2", 00:18:43.511 "trsvcid": "4420" 00:18:43.511 }, 00:18:43.511 "peer_address": { 00:18:43.511 "trtype": "TCP", 00:18:43.511 "adrfam": "IPv4", 00:18:43.511 "traddr": "10.0.0.1", 00:18:43.511 "trsvcid": "44646" 00:18:43.511 }, 00:18:43.511 "auth": { 00:18:43.511 "state": "completed", 00:18:43.511 "digest": "sha384", 00:18:43.511 "dhgroup": "null" 00:18:43.511 } 00:18:43.511 } 
00:18:43.511 ]' 00:18:43.511 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.511 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:43.511 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.769 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:43.769 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.769 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.769 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.769 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.026 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:18:44.026 13:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:18:44.959 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.959 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.959 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:44.960 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.960 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.960 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.960 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.960 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:44.960 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:45.217 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:45.217 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.217 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:45.217 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:45.217 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:45.217 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.217 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:18:45.217 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.217 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.217 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.217 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:45.217 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.217 13:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.474 00:18:45.474 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.474 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.474 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.731 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.731 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:45.731 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.731 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.731 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.731 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.731 { 00:18:45.731 "cntlid": 55, 00:18:45.731 "qid": 0, 00:18:45.731 "state": "enabled", 00:18:45.731 "thread": "nvmf_tgt_poll_group_000", 00:18:45.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:45.731 "listen_address": { 00:18:45.731 "trtype": "TCP", 00:18:45.731 "adrfam": "IPv4", 00:18:45.731 "traddr": "10.0.0.2", 00:18:45.731 "trsvcid": "4420" 00:18:45.731 }, 00:18:45.731 "peer_address": { 00:18:45.731 "trtype": "TCP", 00:18:45.731 "adrfam": "IPv4", 00:18:45.731 "traddr": "10.0.0.1", 00:18:45.731 "trsvcid": "58474" 00:18:45.731 }, 00:18:45.731 "auth": { 00:18:45.731 "state": "completed", 00:18:45.731 "digest": "sha384", 00:18:45.731 "dhgroup": "null" 00:18:45.731 } 00:18:45.731 } 00:18:45.731 ]' 00:18:45.731 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.731 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:45.731 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.989 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:45.989 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.989 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.989 13:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.989 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.248 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:18:46.248 13:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:18:47.180 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.180 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.180 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:47.180 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.180 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.180 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.180 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:47.180 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.180 13:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:47.180 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:47.453 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:47.453 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.453 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:47.453 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:47.453 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:47.453 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.453 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.453 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.453 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.453 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.453 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.453 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.453 13:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.711 00:18:47.711 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.711 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.711 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.968 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.968 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.968 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.968 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.968 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.968 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.968 { 00:18:47.968 "cntlid": 57, 00:18:47.968 "qid": 0, 00:18:47.968 "state": "enabled", 00:18:47.968 "thread": "nvmf_tgt_poll_group_000", 00:18:47.968 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:47.968 "listen_address": { 00:18:47.968 "trtype": "TCP", 00:18:47.968 "adrfam": "IPv4", 00:18:47.968 "traddr": "10.0.0.2", 00:18:47.968 "trsvcid": "4420" 00:18:47.968 }, 00:18:47.968 "peer_address": { 00:18:47.968 "trtype": "TCP", 00:18:47.968 "adrfam": "IPv4", 00:18:47.968 "traddr": "10.0.0.1", 00:18:47.968 "trsvcid": "58504" 00:18:47.968 }, 00:18:47.968 "auth": { 00:18:47.968 "state": "completed", 00:18:47.968 "digest": "sha384", 00:18:47.968 "dhgroup": "ffdhe2048" 00:18:47.968 } 00:18:47.968 } 00:18:47.968 ]' 00:18:47.968 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.968 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.968 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.968 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:47.968 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.225 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.225 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.225 13:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.489 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:18:48.489 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:18:49.426 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.426 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:49.426 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.426 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.426 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.426 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.426 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:49.426 13:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:49.683 13:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:49.683 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:49.683 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:49.683 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:49.683 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:49.683 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.683 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.683 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.683 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.683 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.683 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.683 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.683 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:49.940 00:18:49.940 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:49.940 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.940 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.198 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.198 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.198 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.198 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.198 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.198 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.198 { 00:18:50.198 "cntlid": 59, 00:18:50.198 "qid": 0, 00:18:50.198 "state": "enabled", 00:18:50.198 "thread": "nvmf_tgt_poll_group_000", 00:18:50.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:50.198 "listen_address": { 00:18:50.198 "trtype": "TCP", 00:18:50.198 "adrfam": "IPv4", 00:18:50.198 "traddr": "10.0.0.2", 00:18:50.198 "trsvcid": "4420" 00:18:50.198 }, 00:18:50.198 "peer_address": { 00:18:50.198 "trtype": "TCP", 00:18:50.198 "adrfam": "IPv4", 00:18:50.198 "traddr": "10.0.0.1", 00:18:50.198 "trsvcid": "58522" 00:18:50.198 }, 00:18:50.198 "auth": { 00:18:50.198 "state": 
"completed", 00:18:50.198 "digest": "sha384", 00:18:50.198 "dhgroup": "ffdhe2048" 00:18:50.198 } 00:18:50.198 } 00:18:50.198 ]' 00:18:50.198 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.198 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.198 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.456 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:50.456 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.456 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.456 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.456 13:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.714 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:18:50.714 13:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:18:51.648 13:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.648 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:51.648 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.648 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.648 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.648 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.648 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:51.648 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:51.906 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:51.906 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.906 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:51.906 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:51.906 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:51.906 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.906 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.906 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.906 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.906 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.906 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.906 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:51.906 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:52.163 00:18:52.163 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.163 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.163 13:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.420 
13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.420 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.420 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.420 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.420 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.420 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.420 { 00:18:52.420 "cntlid": 61, 00:18:52.420 "qid": 0, 00:18:52.420 "state": "enabled", 00:18:52.420 "thread": "nvmf_tgt_poll_group_000", 00:18:52.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:52.420 "listen_address": { 00:18:52.420 "trtype": "TCP", 00:18:52.420 "adrfam": "IPv4", 00:18:52.420 "traddr": "10.0.0.2", 00:18:52.420 "trsvcid": "4420" 00:18:52.420 }, 00:18:52.420 "peer_address": { 00:18:52.420 "trtype": "TCP", 00:18:52.420 "adrfam": "IPv4", 00:18:52.420 "traddr": "10.0.0.1", 00:18:52.420 "trsvcid": "58554" 00:18:52.420 }, 00:18:52.420 "auth": { 00:18:52.420 "state": "completed", 00:18:52.420 "digest": "sha384", 00:18:52.420 "dhgroup": "ffdhe2048" 00:18:52.420 } 00:18:52.420 } 00:18:52.420 ]' 00:18:52.420 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.420 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.420 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.678 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.678 13:29:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.678 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.678 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.678 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.935 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:18:52.936 13:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:18:53.870 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.870 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.870 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:53.870 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.870 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.870 
13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.870 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.870 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:53.870 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:54.128 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:54.128 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.128 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:54.128 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:54.128 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:54.128 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.128 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:18:54.128 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.128 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.128 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.128 13:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:54.128 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.128 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:54.385 00:18:54.385 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.385 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.385 13:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:54.643 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.643 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.643 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.643 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.643 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.643 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.643 { 00:18:54.643 "cntlid": 63, 00:18:54.643 
"qid": 0, 00:18:54.643 "state": "enabled", 00:18:54.643 "thread": "nvmf_tgt_poll_group_000", 00:18:54.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:54.643 "listen_address": { 00:18:54.643 "trtype": "TCP", 00:18:54.643 "adrfam": "IPv4", 00:18:54.643 "traddr": "10.0.0.2", 00:18:54.643 "trsvcid": "4420" 00:18:54.643 }, 00:18:54.643 "peer_address": { 00:18:54.643 "trtype": "TCP", 00:18:54.643 "adrfam": "IPv4", 00:18:54.643 "traddr": "10.0.0.1", 00:18:54.643 "trsvcid": "52914" 00:18:54.643 }, 00:18:54.643 "auth": { 00:18:54.643 "state": "completed", 00:18:54.643 "digest": "sha384", 00:18:54.643 "dhgroup": "ffdhe2048" 00:18:54.643 } 00:18:54.643 } 00:18:54.643 ]' 00:18:54.643 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.643 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:54.644 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.644 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:54.644 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.902 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.902 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.902 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.161 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:18:55.161 13:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:18:56.096 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.096 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.096 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:56.096 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.096 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.096 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.096 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:56.096 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.096 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:56.096 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:56.354 13:29:37 
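[Editor's note] The `for dhgroup in "${dhgroups[@]}"` line marks the outer loop advancing from ffdhe2048 to ffdhe3072. The nested loop structure driving this whole section can be sketched as below (echo-only; the full ffdhe group list is an assumption, only ffdhe2048/ffdhe3072 are visible in this excerpt):

```shell
# Sketch of the nested loops behind this section of the log (auth.sh@119-123):
# outer loop over DH groups, inner loop over key ids, one connect cycle each.
digest=sha384
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # assumption: full list
keys=(key0 key1 key2 key3)

for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    echo "hostrpc bdev_nvme_set_options --dhchap-digests $digest --dhchap-dhgroups $dhgroup"
    echo "connect_authenticate $digest $dhgroup $keyid"
  done
done
```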
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:56.354 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.355 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:56.355 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:56.355 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:56.355 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.355 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.355 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.355 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.355 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.355 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.355 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.355 13:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:56.612 00:18:56.612 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.612 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.613 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.869 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.869 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.869 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.869 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.869 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.869 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.869 { 00:18:56.869 "cntlid": 65, 00:18:56.869 "qid": 0, 00:18:56.869 "state": "enabled", 00:18:56.869 "thread": "nvmf_tgt_poll_group_000", 00:18:56.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:56.869 "listen_address": { 00:18:56.869 "trtype": "TCP", 00:18:56.869 "adrfam": "IPv4", 00:18:56.869 "traddr": "10.0.0.2", 00:18:56.869 "trsvcid": "4420" 00:18:56.869 }, 00:18:56.869 "peer_address": { 00:18:56.869 "trtype": "TCP", 00:18:56.869 "adrfam": "IPv4", 00:18:56.869 "traddr": "10.0.0.1", 00:18:56.869 "trsvcid": "52944" 00:18:56.869 }, 00:18:56.869 "auth": { 00:18:56.869 "state": 
"completed", 00:18:56.869 "digest": "sha384", 00:18:56.869 "dhgroup": "ffdhe3072" 00:18:56.869 } 00:18:56.869 } 00:18:56.869 ]' 00:18:56.869 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.869 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.869 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:56.869 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:56.869 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.127 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.127 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.127 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.386 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:18:57.386 13:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:18:58.324 13:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.324 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.324 13:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:18:58.324 13:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.324 13:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.324 13:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.324 13:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.324 13:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:58.324 13:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:58.582 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:58.582 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.582 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:58.582 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:58.582 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:58.582 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.582 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.582 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.582 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.582 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.582 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.582 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.582 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:58.839 00:18:58.839 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.839 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.839 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.097 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.097 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.097 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.097 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.097 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.097 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.097 { 00:18:59.097 "cntlid": 67, 00:18:59.097 "qid": 0, 00:18:59.097 "state": "enabled", 00:18:59.097 "thread": "nvmf_tgt_poll_group_000", 00:18:59.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:18:59.097 "listen_address": { 00:18:59.098 "trtype": "TCP", 00:18:59.098 "adrfam": "IPv4", 00:18:59.098 "traddr": "10.0.0.2", 00:18:59.098 "trsvcid": "4420" 00:18:59.098 }, 00:18:59.098 "peer_address": { 00:18:59.098 "trtype": "TCP", 00:18:59.098 "adrfam": "IPv4", 00:18:59.098 "traddr": "10.0.0.1", 00:18:59.098 "trsvcid": "52968" 00:18:59.098 }, 00:18:59.098 "auth": { 00:18:59.098 "state": "completed", 00:18:59.098 "digest": "sha384", 00:18:59.098 "dhgroup": "ffdhe3072" 00:18:59.098 } 00:18:59.098 } 00:18:59.098 ]' 00:18:59.098 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.098 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.098 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.362 13:29:40 
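[Editor's note] Each cycle ends with an in-band check from the kernel initiator: `nvme connect` with the DH-HMAC-CHAP secrets, then `nvme disconnect` (auth.sh@36/@82). An echo-only sketch of that step (`nvme` is stubbed so the sketch runs without nvme-cli; the secret placeholders stand in for the `DHHC-1:xx:...` strings in the log):

```shell
# Stubbed sketch of the nvme-cli side of each cycle; the stub just echoes.
nvme() { echo "nvme $*"; }

nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4" \
  --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 \
  --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
```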
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:59.362 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.362 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.362 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.362 13:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.684 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:18:59.684 13:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:19:00.652 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.652 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:00.652 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:00.652 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.652 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.652 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.652 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:00.652 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:00.911 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:00.911 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.911 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:00.911 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:00.911 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:00.911 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.911 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.911 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.911 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:00.911 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.911 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.911 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.911 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.169 00:19:01.169 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.169 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.169 13:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.427 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.427 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.427 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.427 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.427 13:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.427 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.427 { 00:19:01.427 "cntlid": 69, 00:19:01.427 "qid": 0, 00:19:01.427 "state": "enabled", 00:19:01.427 "thread": "nvmf_tgt_poll_group_000", 00:19:01.427 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:01.427 "listen_address": { 00:19:01.427 "trtype": "TCP", 00:19:01.427 "adrfam": "IPv4", 00:19:01.427 "traddr": "10.0.0.2", 00:19:01.427 "trsvcid": "4420" 00:19:01.427 }, 00:19:01.427 "peer_address": { 00:19:01.427 "trtype": "TCP", 00:19:01.427 "adrfam": "IPv4", 00:19:01.427 "traddr": "10.0.0.1", 00:19:01.427 "trsvcid": "52992" 00:19:01.427 }, 00:19:01.427 "auth": { 00:19:01.427 "state": "completed", 00:19:01.427 "digest": "sha384", 00:19:01.427 "dhgroup": "ffdhe3072" 00:19:01.427 } 00:19:01.427 } 00:19:01.427 ]' 00:19:01.427 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.685 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.685 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.685 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.685 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.685 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.685 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.685 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.944 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:19:01.944 13:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:19:02.880 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.880 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:02.880 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.881 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.881 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.881 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.881 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:02.881 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:03.138 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:03.138 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.138 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:03.138 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:03.138 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:03.138 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.138 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:19:03.138 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.138 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.138 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.138 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:03.138 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:03.139 13:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:03.397 00:19:03.397 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.397 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.397 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.655 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.655 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.655 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.655 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.655 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.655 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.655 { 00:19:03.655 "cntlid": 71, 00:19:03.655 "qid": 0, 00:19:03.655 "state": "enabled", 00:19:03.655 "thread": "nvmf_tgt_poll_group_000", 00:19:03.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:03.655 "listen_address": { 00:19:03.655 "trtype": "TCP", 00:19:03.655 "adrfam": "IPv4", 00:19:03.655 "traddr": "10.0.0.2", 00:19:03.655 "trsvcid": "4420" 00:19:03.655 }, 00:19:03.655 "peer_address": { 00:19:03.655 "trtype": "TCP", 00:19:03.655 "adrfam": "IPv4", 00:19:03.655 "traddr": "10.0.0.1", 
00:19:03.655 "trsvcid": "53014" 00:19:03.655 }, 00:19:03.655 "auth": { 00:19:03.655 "state": "completed", 00:19:03.655 "digest": "sha384", 00:19:03.655 "dhgroup": "ffdhe3072" 00:19:03.655 } 00:19:03.655 } 00:19:03.655 ]' 00:19:03.655 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.655 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.655 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.913 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.913 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.913 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.913 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.913 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.171 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:19:04.171 13:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:19:05.108 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.108 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:05.108 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.108 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.108 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.109 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.109 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.109 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:05.109 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:05.366 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:05.366 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.366 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:05.366 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:05.366 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:05.366 13:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.366 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.366 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.366 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.366 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.366 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.366 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.366 13:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.624 00:19:05.624 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.624 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.624 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.190 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.190 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.190 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.190 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.190 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.190 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.190 { 00:19:06.190 "cntlid": 73, 00:19:06.190 "qid": 0, 00:19:06.190 "state": "enabled", 00:19:06.190 "thread": "nvmf_tgt_poll_group_000", 00:19:06.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:06.190 "listen_address": { 00:19:06.190 "trtype": "TCP", 00:19:06.190 "adrfam": "IPv4", 00:19:06.190 "traddr": "10.0.0.2", 00:19:06.190 "trsvcid": "4420" 00:19:06.190 }, 00:19:06.190 "peer_address": { 00:19:06.190 "trtype": "TCP", 00:19:06.190 "adrfam": "IPv4", 00:19:06.190 "traddr": "10.0.0.1", 00:19:06.190 "trsvcid": "49726" 00:19:06.190 }, 00:19:06.190 "auth": { 00:19:06.190 "state": "completed", 00:19:06.190 "digest": "sha384", 00:19:06.190 "dhgroup": "ffdhe4096" 00:19:06.190 } 00:19:06.190 } 00:19:06.190 ]' 00:19:06.190 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.190 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.190 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.190 13:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.190 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.190 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.190 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.190 13:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.448 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:19:06.448 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:19:07.384 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.384 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:07.384 13:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.384 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.384 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.384 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.384 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.384 13:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.643 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:07.643 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.643 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:07.643 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:07.643 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:07.643 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.643 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.643 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.643 13:29:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.643 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.643 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.643 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.643 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.900 00:19:07.900 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.900 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.900 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.158 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.158 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.158 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.158 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:08.158 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.158 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.158 { 00:19:08.158 "cntlid": 75, 00:19:08.158 "qid": 0, 00:19:08.158 "state": "enabled", 00:19:08.158 "thread": "nvmf_tgt_poll_group_000", 00:19:08.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:08.158 "listen_address": { 00:19:08.158 "trtype": "TCP", 00:19:08.158 "adrfam": "IPv4", 00:19:08.158 "traddr": "10.0.0.2", 00:19:08.158 "trsvcid": "4420" 00:19:08.158 }, 00:19:08.158 "peer_address": { 00:19:08.158 "trtype": "TCP", 00:19:08.158 "adrfam": "IPv4", 00:19:08.158 "traddr": "10.0.0.1", 00:19:08.158 "trsvcid": "49748" 00:19:08.158 }, 00:19:08.158 "auth": { 00:19:08.158 "state": "completed", 00:19:08.158 "digest": "sha384", 00:19:08.158 "dhgroup": "ffdhe4096" 00:19:08.158 } 00:19:08.158 } 00:19:08.158 ]' 00:19:08.158 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.418 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.418 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.418 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:08.418 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.418 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.418 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.418 13:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.676 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:19:08.676 13:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:19:09.613 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.613 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.613 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:09.613 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.613 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.613 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.613 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.613 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:09.613 13:29:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:09.871 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:09.871 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.871 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:09.871 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:09.871 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:09.871 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.871 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.871 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.871 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.871 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.871 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.871 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.871 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.128 00:19:10.128 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.128 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.128 13:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.386 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.386 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.386 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.386 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.644 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.644 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.644 { 00:19:10.644 "cntlid": 77, 00:19:10.644 "qid": 0, 00:19:10.644 "state": "enabled", 00:19:10.644 "thread": "nvmf_tgt_poll_group_000", 00:19:10.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:10.644 "listen_address": { 00:19:10.644 "trtype": "TCP", 00:19:10.644 "adrfam": "IPv4", 00:19:10.644 "traddr": "10.0.0.2", 00:19:10.644 
"trsvcid": "4420" 00:19:10.644 }, 00:19:10.644 "peer_address": { 00:19:10.644 "trtype": "TCP", 00:19:10.644 "adrfam": "IPv4", 00:19:10.644 "traddr": "10.0.0.1", 00:19:10.644 "trsvcid": "49792" 00:19:10.644 }, 00:19:10.644 "auth": { 00:19:10.644 "state": "completed", 00:19:10.644 "digest": "sha384", 00:19:10.644 "dhgroup": "ffdhe4096" 00:19:10.644 } 00:19:10.644 } 00:19:10.644 ]' 00:19:10.644 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.644 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.644 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.644 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.644 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.644 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.644 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.644 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.903 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:19:10.903 13:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 
21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:19:11.842 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.842 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:11.842 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.842 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.842 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.842 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:11.842 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:11.842 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:12.100 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:12.100 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.100 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:12.100 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:12.100 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:12.100 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.100 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:19:12.100 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.100 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.100 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.100 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:12.100 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.100 13:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.357 00:19:12.615 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.615 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:12.615 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.874 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.874 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.874 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.874 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.874 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.874 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.874 { 00:19:12.874 "cntlid": 79, 00:19:12.874 "qid": 0, 00:19:12.874 "state": "enabled", 00:19:12.874 "thread": "nvmf_tgt_poll_group_000", 00:19:12.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:12.874 "listen_address": { 00:19:12.874 "trtype": "TCP", 00:19:12.874 "adrfam": "IPv4", 00:19:12.874 "traddr": "10.0.0.2", 00:19:12.874 "trsvcid": "4420" 00:19:12.874 }, 00:19:12.874 "peer_address": { 00:19:12.874 "trtype": "TCP", 00:19:12.874 "adrfam": "IPv4", 00:19:12.874 "traddr": "10.0.0.1", 00:19:12.874 "trsvcid": "49820" 00:19:12.874 }, 00:19:12.874 "auth": { 00:19:12.874 "state": "completed", 00:19:12.874 "digest": "sha384", 00:19:12.874 "dhgroup": "ffdhe4096" 00:19:12.874 } 00:19:12.874 } 00:19:12.874 ]' 00:19:12.874 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.874 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.874 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.874 13:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:12.874 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.874 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.874 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.874 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.133 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:19:13.133 13:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:19:14.068 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.068 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:14.068 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.068 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
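A detail worth noting in the cycles above: for `key3` the test adds the host with `--dhchap-key key3` only, while for `key0`..`key2` it also passes `--dhchap-ctrlr-key ckeyN`. That comes from the `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})` line in `auth.sh`, which uses bash's `${var:+word}` expansion to emit the extra flag pair only when a controller key exists for the key id. A minimal standalone sketch of that idiom (hypothetical key names, not code from the SPDK tree):

```shell
#!/usr/bin/env bash
# Sketch of the ${var:+...} idiom auth.sh uses: append the
# --dhchap-ctrlr-key argument pair only when a controller key is
# configured for the given key id. key3 here mirrors the log, where
# the subsystem host is added without a controller key.
ckeys=([0]="ckey0" [1]="ckey1" [2]="ckey2" [3]="")

build_args() {
    local keyid=$1
    # Empty/unset entry -> array stays empty and the flag is omitted;
    # non-empty entry -> array holds the two extra words.
    local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "--dhchap-key key$keyid" "${ckey[@]}"
}

build_args 2   # -> --dhchap-key key2 --dhchap-ctrlr-key ckey2
build_args 3   # -> --dhchap-key key3
```

The same expansion drives both the `nvmf_subsystem_add_host` and `bdev_nvme_attach_controller` invocations, which is why the two sides always agree on whether bidirectional authentication is in play.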
00:19:14.068 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.068 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.068 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.069 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:14.069 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:14.327 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:14.327 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.327 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:14.327 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:14.327 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:14.327 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.327 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.327 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.327 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:14.327 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.327 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.327 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.327 13:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.894 00:19:14.894 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.894 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.894 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.152 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.152 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.152 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.152 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.152 13:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.152 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.152 { 00:19:15.152 "cntlid": 81, 00:19:15.152 "qid": 0, 00:19:15.152 "state": "enabled", 00:19:15.152 "thread": "nvmf_tgt_poll_group_000", 00:19:15.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:15.152 "listen_address": { 00:19:15.152 "trtype": "TCP", 00:19:15.152 "adrfam": "IPv4", 00:19:15.152 "traddr": "10.0.0.2", 00:19:15.152 "trsvcid": "4420" 00:19:15.152 }, 00:19:15.152 "peer_address": { 00:19:15.152 "trtype": "TCP", 00:19:15.152 "adrfam": "IPv4", 00:19:15.152 "traddr": "10.0.0.1", 00:19:15.152 "trsvcid": "43938" 00:19:15.152 }, 00:19:15.152 "auth": { 00:19:15.152 "state": "completed", 00:19:15.152 "digest": "sha384", 00:19:15.152 "dhgroup": "ffdhe6144" 00:19:15.152 } 00:19:15.152 } 00:19:15.152 ]' 00:19:15.152 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.152 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.152 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.152 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.152 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.411 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.411 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.411 13:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.670 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:19:15.670 13:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:19:16.607 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.607 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:16.607 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.607 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.607 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.607 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.607 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:16.608 13:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:16.872 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:16.872 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.872 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:16.872 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:16.872 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:16.872 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.872 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.872 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.872 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.872 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.872 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.872 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:16.872 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.438 00:19:17.438 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.438 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.438 13:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.438 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.696 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.696 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.696 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.696 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.696 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.696 { 00:19:17.696 "cntlid": 83, 00:19:17.696 "qid": 0, 00:19:17.696 "state": "enabled", 00:19:17.696 "thread": "nvmf_tgt_poll_group_000", 00:19:17.696 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:17.696 "listen_address": { 00:19:17.696 "trtype": "TCP", 00:19:17.696 "adrfam": "IPv4", 00:19:17.696 "traddr": "10.0.0.2", 00:19:17.696 
"trsvcid": "4420" 00:19:17.696 }, 00:19:17.696 "peer_address": { 00:19:17.696 "trtype": "TCP", 00:19:17.696 "adrfam": "IPv4", 00:19:17.696 "traddr": "10.0.0.1", 00:19:17.696 "trsvcid": "43962" 00:19:17.696 }, 00:19:17.696 "auth": { 00:19:17.696 "state": "completed", 00:19:17.696 "digest": "sha384", 00:19:17.696 "dhgroup": "ffdhe6144" 00:19:17.696 } 00:19:17.696 } 00:19:17.696 ]' 00:19:17.696 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.696 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.696 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.696 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.696 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.696 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.696 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.696 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.956 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:19:17.956 13:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 
21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:19:18.891 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.891 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.891 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:18.891 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.891 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.891 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.891 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.891 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:18.891 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:19.149 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:19.149 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.149 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:19.149 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:19.149 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:19.149 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.149 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.149 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.149 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.149 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.149 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.149 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.149 13:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.715 00:19:19.715 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.715 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:19:19.716 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.973 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.973 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.973 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.973 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.973 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.973 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.973 { 00:19:19.973 "cntlid": 85, 00:19:19.973 "qid": 0, 00:19:19.973 "state": "enabled", 00:19:19.973 "thread": "nvmf_tgt_poll_group_000", 00:19:19.973 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:19.973 "listen_address": { 00:19:19.973 "trtype": "TCP", 00:19:19.973 "adrfam": "IPv4", 00:19:19.973 "traddr": "10.0.0.2", 00:19:19.973 "trsvcid": "4420" 00:19:19.973 }, 00:19:19.973 "peer_address": { 00:19:19.973 "trtype": "TCP", 00:19:19.973 "adrfam": "IPv4", 00:19:19.973 "traddr": "10.0.0.1", 00:19:19.973 "trsvcid": "43980" 00:19:19.973 }, 00:19:19.973 "auth": { 00:19:19.973 "state": "completed", 00:19:19.973 "digest": "sha384", 00:19:19.974 "dhgroup": "ffdhe6144" 00:19:19.974 } 00:19:19.974 } 00:19:19.974 ]' 00:19:19.974 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.974 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:19.974 13:30:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.974 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:19.974 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.232 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.232 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.232 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.489 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:19:20.489 13:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:19:21.424 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.424 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:21.424 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.424 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.424 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.424 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.424 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:21.424 13:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:21.682 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:21.682 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.682 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:21.682 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:21.682 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:21.682 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.682 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:19:21.682 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.682 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.682 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.682 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:21.682 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:21.682 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:22.249 00:19:22.249 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.249 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.249 13:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.506 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.506 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.506 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.506 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:22.506 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.506 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.506 { 00:19:22.506 "cntlid": 87, 00:19:22.506 "qid": 0, 00:19:22.506 "state": "enabled", 00:19:22.506 "thread": "nvmf_tgt_poll_group_000", 00:19:22.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:22.506 "listen_address": { 00:19:22.506 "trtype": "TCP", 00:19:22.506 "adrfam": "IPv4", 00:19:22.506 "traddr": "10.0.0.2", 00:19:22.506 "trsvcid": "4420" 00:19:22.506 }, 00:19:22.506 "peer_address": { 00:19:22.506 "trtype": "TCP", 00:19:22.506 "adrfam": "IPv4", 00:19:22.506 "traddr": "10.0.0.1", 00:19:22.506 "trsvcid": "44016" 00:19:22.506 }, 00:19:22.506 "auth": { 00:19:22.506 "state": "completed", 00:19:22.506 "digest": "sha384", 00:19:22.506 "dhgroup": "ffdhe6144" 00:19:22.506 } 00:19:22.506 } 00:19:22.506 ]' 00:19:22.506 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.506 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.506 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.506 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:22.506 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.506 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.506 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.506 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.764 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:19:22.764 13:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:19:23.698 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.698 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.698 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:23.698 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.698 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.699 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.699 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.699 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.699 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:23.699 13:30:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:23.956 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:23.956 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.956 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:23.956 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:23.956 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:23.956 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.957 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.957 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.957 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.957 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.957 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.957 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.957 13:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.891 00:19:24.891 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.891 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.891 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.149 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.149 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.149 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.149 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.149 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.149 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:25.149 { 00:19:25.149 "cntlid": 89, 00:19:25.149 "qid": 0, 00:19:25.149 "state": "enabled", 00:19:25.149 "thread": "nvmf_tgt_poll_group_000", 00:19:25.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:25.149 "listen_address": { 00:19:25.149 "trtype": "TCP", 00:19:25.149 "adrfam": "IPv4", 00:19:25.149 "traddr": "10.0.0.2", 00:19:25.149 
"trsvcid": "4420" 00:19:25.149 }, 00:19:25.149 "peer_address": { 00:19:25.149 "trtype": "TCP", 00:19:25.149 "adrfam": "IPv4", 00:19:25.149 "traddr": "10.0.0.1", 00:19:25.149 "trsvcid": "40006" 00:19:25.149 }, 00:19:25.149 "auth": { 00:19:25.149 "state": "completed", 00:19:25.149 "digest": "sha384", 00:19:25.149 "dhgroup": "ffdhe8192" 00:19:25.149 } 00:19:25.149 } 00:19:25.149 ]' 00:19:25.149 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.149 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.149 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.149 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:25.149 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.407 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.407 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.407 13:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.665 13:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:19:25.665 13:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:19:26.600 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.600 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:26.600 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.600 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.600 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.600 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:26.600 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:26.600 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:26.858 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:26.858 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.858 13:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:26.858 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:26.858 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:26.858 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.858 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.858 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.858 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.858 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.858 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.858 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.858 13:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.793 00:19:27.793 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.793 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.793 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.052 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.052 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.052 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.052 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.052 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.052 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:28.052 { 00:19:28.052 "cntlid": 91, 00:19:28.052 "qid": 0, 00:19:28.052 "state": "enabled", 00:19:28.052 "thread": "nvmf_tgt_poll_group_000", 00:19:28.052 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:28.052 "listen_address": { 00:19:28.052 "trtype": "TCP", 00:19:28.052 "adrfam": "IPv4", 00:19:28.052 "traddr": "10.0.0.2", 00:19:28.052 "trsvcid": "4420" 00:19:28.052 }, 00:19:28.052 "peer_address": { 00:19:28.052 "trtype": "TCP", 00:19:28.052 "adrfam": "IPv4", 00:19:28.052 "traddr": "10.0.0.1", 00:19:28.052 "trsvcid": "40034" 00:19:28.052 }, 00:19:28.052 "auth": { 00:19:28.052 "state": "completed", 00:19:28.052 "digest": "sha384", 00:19:28.052 "dhgroup": "ffdhe8192" 00:19:28.052 } 00:19:28.052 } 00:19:28.052 ]' 00:19:28.052 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:28.052 13:30:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:28.052 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:28.052 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:28.052 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:28.052 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.052 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.052 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.310 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:19:28.310 13:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:19:29.249 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.249 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:29.249 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.249 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.249 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.249 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:29.249 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:29.249 13:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:29.517 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:29.517 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.517 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:29.517 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:29.517 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:29.517 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.517 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:29.517 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.517 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.517 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.517 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.517 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.517 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.557 00:19:30.557 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.557 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.557 13:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.557 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.557 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.557 13:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.557 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.557 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.557 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.557 { 00:19:30.557 "cntlid": 93, 00:19:30.557 "qid": 0, 00:19:30.557 "state": "enabled", 00:19:30.557 "thread": "nvmf_tgt_poll_group_000", 00:19:30.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:30.557 "listen_address": { 00:19:30.557 "trtype": "TCP", 00:19:30.557 "adrfam": "IPv4", 00:19:30.557 "traddr": "10.0.0.2", 00:19:30.557 "trsvcid": "4420" 00:19:30.557 }, 00:19:30.557 "peer_address": { 00:19:30.557 "trtype": "TCP", 00:19:30.557 "adrfam": "IPv4", 00:19:30.557 "traddr": "10.0.0.1", 00:19:30.557 "trsvcid": "40058" 00:19:30.557 }, 00:19:30.557 "auth": { 00:19:30.557 "state": "completed", 00:19:30.557 "digest": "sha384", 00:19:30.557 "dhgroup": "ffdhe8192" 00:19:30.557 } 00:19:30.557 } 00:19:30.557 ]' 00:19:30.557 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:30.815 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.815 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:30.815 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:30.815 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:30.815 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.815 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.815 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.072 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:19:31.072 13:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:19:32.009 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.009 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.009 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:32.009 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.009 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.009 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.009 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:32.009 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:32.009 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:32.267 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:32.267 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:32.267 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:32.267 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:32.267 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:32.267 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.267 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:19:32.267 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.267 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.267 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.267 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:32.267 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:32.267 13:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.202 00:19:33.202 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.202 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.202 13:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.461 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.461 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.461 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.461 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.461 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.461 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:33.461 { 00:19:33.461 "cntlid": 95, 00:19:33.461 "qid": 0, 00:19:33.461 "state": "enabled", 00:19:33.461 "thread": "nvmf_tgt_poll_group_000", 00:19:33.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:33.461 "listen_address": { 00:19:33.461 "trtype": "TCP", 00:19:33.461 "adrfam": 
"IPv4", 00:19:33.461 "traddr": "10.0.0.2", 00:19:33.461 "trsvcid": "4420" 00:19:33.461 }, 00:19:33.461 "peer_address": { 00:19:33.461 "trtype": "TCP", 00:19:33.461 "adrfam": "IPv4", 00:19:33.461 "traddr": "10.0.0.1", 00:19:33.461 "trsvcid": "40090" 00:19:33.461 }, 00:19:33.461 "auth": { 00:19:33.461 "state": "completed", 00:19:33.461 "digest": "sha384", 00:19:33.461 "dhgroup": "ffdhe8192" 00:19:33.461 } 00:19:33.461 } 00:19:33.461 ]' 00:19:33.461 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:33.461 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.461 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:33.461 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:33.461 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:33.720 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.720 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.720 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.980 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:19:33.980 13:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 
21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:19:34.915 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.915 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:34.915 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.915 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.915 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.915 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:34.915 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:34.915 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:34.915 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:34.915 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:35.173 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:19:35.173 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.173 
13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:35.173 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:35.173 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:35.173 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.173 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.173 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.173 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.173 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.173 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.173 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.173 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:35.431 00:19:35.431 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.431 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.431 13:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.689 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.689 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.689 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.689 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.689 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.689 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.689 { 00:19:35.689 "cntlid": 97, 00:19:35.689 "qid": 0, 00:19:35.689 "state": "enabled", 00:19:35.689 "thread": "nvmf_tgt_poll_group_000", 00:19:35.689 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:35.689 "listen_address": { 00:19:35.689 "trtype": "TCP", 00:19:35.689 "adrfam": "IPv4", 00:19:35.689 "traddr": "10.0.0.2", 00:19:35.689 "trsvcid": "4420" 00:19:35.689 }, 00:19:35.689 "peer_address": { 00:19:35.689 "trtype": "TCP", 00:19:35.689 "adrfam": "IPv4", 00:19:35.689 "traddr": "10.0.0.1", 00:19:35.689 "trsvcid": "35034" 00:19:35.689 }, 00:19:35.689 "auth": { 00:19:35.689 "state": "completed", 00:19:35.689 "digest": "sha512", 00:19:35.689 "dhgroup": "null" 00:19:35.689 } 00:19:35.689 } 00:19:35.689 ]' 00:19:35.689 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.689 13:30:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.689 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.689 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:35.689 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.689 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.689 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.689 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.257 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:19:36.257 13:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.198 13:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.198 13:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:37.766 00:19:37.766 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.766 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.766 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.024 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.024 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.025 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.025 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.025 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.025 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.025 { 00:19:38.025 "cntlid": 99, 00:19:38.025 "qid": 0, 00:19:38.025 "state": "enabled", 00:19:38.025 "thread": "nvmf_tgt_poll_group_000", 00:19:38.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:38.025 "listen_address": { 00:19:38.025 "trtype": "TCP", 00:19:38.025 "adrfam": "IPv4", 00:19:38.025 "traddr": "10.0.0.2", 00:19:38.025 "trsvcid": "4420" 00:19:38.025 }, 00:19:38.025 "peer_address": { 00:19:38.025 "trtype": "TCP", 00:19:38.025 "adrfam": "IPv4", 00:19:38.025 "traddr": "10.0.0.1", 00:19:38.025 "trsvcid": "35046" 00:19:38.025 }, 00:19:38.025 "auth": { 00:19:38.025 "state": "completed", 00:19:38.025 "digest": "sha512", 00:19:38.025 "dhgroup": "null" 00:19:38.025 } 00:19:38.025 } 00:19:38.025 ]' 00:19:38.025 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:38.025 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.025 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:38.025 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:38.025 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:38.025 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.025 
13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.025 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.283 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:19:38.283 13:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:19:39.222 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.222 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:39.222 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.222 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.222 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.222 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.222 
13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:39.222 13:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:39.480 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:19:39.480 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.480 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:39.480 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:39.480 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:39.480 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.480 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.480 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.480 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.480 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.480 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.480 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.480 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:39.738 00:19:39.995 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.995 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.995 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.253 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.253 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.253 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.253 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.253 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.253 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.253 { 00:19:40.253 "cntlid": 101, 00:19:40.253 "qid": 0, 00:19:40.253 "state": "enabled", 00:19:40.253 "thread": "nvmf_tgt_poll_group_000", 00:19:40.253 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:40.253 "listen_address": { 00:19:40.253 "trtype": "TCP", 00:19:40.253 "adrfam": "IPv4", 00:19:40.253 "traddr": "10.0.0.2", 00:19:40.253 "trsvcid": "4420" 00:19:40.253 }, 00:19:40.253 "peer_address": { 00:19:40.253 "trtype": "TCP", 00:19:40.253 "adrfam": "IPv4", 00:19:40.253 "traddr": "10.0.0.1", 00:19:40.253 "trsvcid": "35074" 00:19:40.253 }, 00:19:40.253 "auth": { 00:19:40.253 "state": "completed", 00:19:40.253 "digest": "sha512", 00:19:40.253 "dhgroup": "null" 00:19:40.253 } 00:19:40.253 } 00:19:40.253 ]' 00:19:40.253 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.253 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.253 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.253 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:40.253 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.253 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.253 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.253 13:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.511 13:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:19:40.511 13:30:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:19:41.444 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.444 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:41.444 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.444 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.444 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.444 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.444 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:41.444 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:41.701 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:41.701 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:19:41.701 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:41.701 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:41.701 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:41.701 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.701 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:19:41.701 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.701 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.701 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.701 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:41.701 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.702 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:41.959 00:19:41.959 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.959 
13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.959 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.215 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.473 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.473 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.473 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.473 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.473 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.473 { 00:19:42.473 "cntlid": 103, 00:19:42.473 "qid": 0, 00:19:42.473 "state": "enabled", 00:19:42.473 "thread": "nvmf_tgt_poll_group_000", 00:19:42.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:42.473 "listen_address": { 00:19:42.473 "trtype": "TCP", 00:19:42.473 "adrfam": "IPv4", 00:19:42.473 "traddr": "10.0.0.2", 00:19:42.473 "trsvcid": "4420" 00:19:42.473 }, 00:19:42.473 "peer_address": { 00:19:42.473 "trtype": "TCP", 00:19:42.473 "adrfam": "IPv4", 00:19:42.473 "traddr": "10.0.0.1", 00:19:42.473 "trsvcid": "35104" 00:19:42.473 }, 00:19:42.473 "auth": { 00:19:42.473 "state": "completed", 00:19:42.473 "digest": "sha512", 00:19:42.473 "dhgroup": "null" 00:19:42.473 } 00:19:42.473 } 00:19:42.473 ]' 00:19:42.473 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.473 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:19:42.473 13:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.473 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:42.473 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.473 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.473 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.473 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.731 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:19:42.731 13:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:19:43.664 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.664 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:43.664 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.664 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.664 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.664 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:43.664 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.664 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:43.664 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:43.922 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:43.922 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.922 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:43.922 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:43.922 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:43.922 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.922 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.922 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.922 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.922 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.922 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.922 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.922 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.180 00:19:44.180 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:44.180 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:44.180 13:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.437 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.437 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.437 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:44.437 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.437 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.437 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:44.437 { 00:19:44.437 "cntlid": 105, 00:19:44.437 "qid": 0, 00:19:44.437 "state": "enabled", 00:19:44.437 "thread": "nvmf_tgt_poll_group_000", 00:19:44.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:44.437 "listen_address": { 00:19:44.437 "trtype": "TCP", 00:19:44.437 "adrfam": "IPv4", 00:19:44.437 "traddr": "10.0.0.2", 00:19:44.437 "trsvcid": "4420" 00:19:44.437 }, 00:19:44.437 "peer_address": { 00:19:44.437 "trtype": "TCP", 00:19:44.437 "adrfam": "IPv4", 00:19:44.437 "traddr": "10.0.0.1", 00:19:44.437 "trsvcid": "35992" 00:19:44.437 }, 00:19:44.437 "auth": { 00:19:44.437 "state": "completed", 00:19:44.437 "digest": "sha512", 00:19:44.437 "dhgroup": "ffdhe2048" 00:19:44.437 } 00:19:44.437 } 00:19:44.437 ]' 00:19:44.437 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:44.696 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:44.696 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:44.696 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:44.696 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:44.696 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.696 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.696 13:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.954 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:19:44.954 13:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:19:45.886 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.886 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:45.886 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.886 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.886 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.886 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:45.886 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:45.886 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:46.144 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:46.144 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.144 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:46.144 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:46.144 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:46.144 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.144 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.144 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.144 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.144 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.144 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.144 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.144 13:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.402 00:19:46.402 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.402 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.402 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.660 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.660 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.660 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.660 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.660 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.660 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:46.660 { 00:19:46.660 "cntlid": 107, 00:19:46.660 "qid": 0, 00:19:46.660 "state": "enabled", 00:19:46.660 "thread": "nvmf_tgt_poll_group_000", 00:19:46.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:46.660 
"listen_address": { 00:19:46.660 "trtype": "TCP", 00:19:46.660 "adrfam": "IPv4", 00:19:46.660 "traddr": "10.0.0.2", 00:19:46.660 "trsvcid": "4420" 00:19:46.660 }, 00:19:46.660 "peer_address": { 00:19:46.660 "trtype": "TCP", 00:19:46.660 "adrfam": "IPv4", 00:19:46.660 "traddr": "10.0.0.1", 00:19:46.660 "trsvcid": "36030" 00:19:46.660 }, 00:19:46.660 "auth": { 00:19:46.660 "state": "completed", 00:19:46.660 "digest": "sha512", 00:19:46.660 "dhgroup": "ffdhe2048" 00:19:46.660 } 00:19:46.660 } 00:19:46.660 ]' 00:19:46.660 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:46.660 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.660 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:46.917 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:46.917 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:46.917 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.917 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.917 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.174 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:19:47.174 13:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:19:48.106 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.106 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:48.106 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.106 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.106 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.106 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.106 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:48.106 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:48.364 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:48.364 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.364 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:19:48.364 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:48.364 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:48.364 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.364 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.364 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.364 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.364 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.364 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.364 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.364 13:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.622 00:19:48.622 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:48.622 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.622 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.188 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.188 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.188 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.188 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.188 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.188 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.188 { 00:19:49.188 "cntlid": 109, 00:19:49.188 "qid": 0, 00:19:49.188 "state": "enabled", 00:19:49.188 "thread": "nvmf_tgt_poll_group_000", 00:19:49.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:49.188 "listen_address": { 00:19:49.188 "trtype": "TCP", 00:19:49.188 "adrfam": "IPv4", 00:19:49.188 "traddr": "10.0.0.2", 00:19:49.188 "trsvcid": "4420" 00:19:49.188 }, 00:19:49.188 "peer_address": { 00:19:49.188 "trtype": "TCP", 00:19:49.188 "adrfam": "IPv4", 00:19:49.188 "traddr": "10.0.0.1", 00:19:49.188 "trsvcid": "36050" 00:19:49.188 }, 00:19:49.188 "auth": { 00:19:49.188 "state": "completed", 00:19:49.188 "digest": "sha512", 00:19:49.188 "dhgroup": "ffdhe2048" 00:19:49.188 } 00:19:49.188 } 00:19:49.188 ]' 00:19:49.188 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.188 13:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.188 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.188 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:49.188 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.188 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.188 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.188 13:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.446 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:19:49.446 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:19:50.379 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.379 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:50.379 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.379 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.379 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.379 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:50.379 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:50.379 13:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:50.637 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:50.637 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:50.637 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:50.637 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:50.637 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:50.637 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.637 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:19:50.637 13:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.637 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.637 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.637 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:50.637 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.637 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:50.894 00:19:50.894 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:50.894 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:50.894 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.458 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.458 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.458 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.458 13:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.458 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.458 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.458 { 00:19:51.458 "cntlid": 111, 00:19:51.458 "qid": 0, 00:19:51.458 "state": "enabled", 00:19:51.458 "thread": "nvmf_tgt_poll_group_000", 00:19:51.458 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:51.458 "listen_address": { 00:19:51.458 "trtype": "TCP", 00:19:51.458 "adrfam": "IPv4", 00:19:51.458 "traddr": "10.0.0.2", 00:19:51.458 "trsvcid": "4420" 00:19:51.458 }, 00:19:51.458 "peer_address": { 00:19:51.458 "trtype": "TCP", 00:19:51.458 "adrfam": "IPv4", 00:19:51.458 "traddr": "10.0.0.1", 00:19:51.458 "trsvcid": "36086" 00:19:51.458 }, 00:19:51.458 "auth": { 00:19:51.458 "state": "completed", 00:19:51.458 "digest": "sha512", 00:19:51.458 "dhgroup": "ffdhe2048" 00:19:51.458 } 00:19:51.458 } 00:19:51.458 ]' 00:19:51.458 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.458 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.458 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.458 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:51.458 13:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.458 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.458 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.458 13:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.715 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:19:51.715 13:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:19:52.689 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.689 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:52.689 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.689 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.689 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.689 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.689 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.689 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe3072 00:19:52.689 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:52.983 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:52.983 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.983 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:52.983 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:52.983 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:52.983 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.983 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.983 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.983 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.983 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.983 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.983 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.983 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.241 00:19:53.241 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.241 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.241 13:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.499 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.499 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.499 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.499 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.499 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.499 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.499 { 00:19:53.499 "cntlid": 113, 00:19:53.499 "qid": 0, 00:19:53.499 "state": "enabled", 00:19:53.499 "thread": "nvmf_tgt_poll_group_000", 00:19:53.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:53.499 "listen_address": { 
00:19:53.499 "trtype": "TCP", 00:19:53.499 "adrfam": "IPv4", 00:19:53.499 "traddr": "10.0.0.2", 00:19:53.499 "trsvcid": "4420" 00:19:53.499 }, 00:19:53.499 "peer_address": { 00:19:53.499 "trtype": "TCP", 00:19:53.499 "adrfam": "IPv4", 00:19:53.499 "traddr": "10.0.0.1", 00:19:53.499 "trsvcid": "36114" 00:19:53.499 }, 00:19:53.499 "auth": { 00:19:53.499 "state": "completed", 00:19:53.499 "digest": "sha512", 00:19:53.499 "dhgroup": "ffdhe3072" 00:19:53.499 } 00:19:53.499 } 00:19:53.499 ]' 00:19:53.499 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.757 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.757 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.757 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:53.757 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.757 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.757 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.757 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.015 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:19:54.015 13:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:19:54.949 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.949 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:54.949 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.949 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.949 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.949 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.949 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:54.949 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:55.207 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:55.207 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:19:55.207 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:55.207 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:55.207 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:55.207 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.207 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.207 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.207 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.207 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.207 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.207 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.207 13:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.771 00:19:55.771 13:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:55.771 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.771 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.029 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.029 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.029 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.029 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.029 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.029 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.029 { 00:19:56.029 "cntlid": 115, 00:19:56.029 "qid": 0, 00:19:56.029 "state": "enabled", 00:19:56.029 "thread": "nvmf_tgt_poll_group_000", 00:19:56.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:56.029 "listen_address": { 00:19:56.029 "trtype": "TCP", 00:19:56.029 "adrfam": "IPv4", 00:19:56.029 "traddr": "10.0.0.2", 00:19:56.029 "trsvcid": "4420" 00:19:56.029 }, 00:19:56.029 "peer_address": { 00:19:56.029 "trtype": "TCP", 00:19:56.029 "adrfam": "IPv4", 00:19:56.029 "traddr": "10.0.0.1", 00:19:56.029 "trsvcid": "42226" 00:19:56.029 }, 00:19:56.029 "auth": { 00:19:56.029 "state": "completed", 00:19:56.029 "digest": "sha512", 00:19:56.029 "dhgroup": "ffdhe3072" 00:19:56.029 } 00:19:56.029 } 00:19:56.029 ]' 00:19:56.029 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:19:56.029 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.029 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.029 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:56.029 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.029 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.029 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.029 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.287 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:19:56.287 13:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:19:57.225 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.225 13:30:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:57.225 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.225 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.225 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.225 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.225 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:57.225 13:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:57.483 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:19:57.483 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.483 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:57.483 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:57.483 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:57.483 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.483 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.483 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.483 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.483 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.483 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.483 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.483 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.051 00:19:58.051 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.051 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.051 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.309 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.309 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.309 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.309 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.309 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.309 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.309 { 00:19:58.309 "cntlid": 117, 00:19:58.309 "qid": 0, 00:19:58.309 "state": "enabled", 00:19:58.309 "thread": "nvmf_tgt_poll_group_000", 00:19:58.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:19:58.309 "listen_address": { 00:19:58.309 "trtype": "TCP", 00:19:58.309 "adrfam": "IPv4", 00:19:58.309 "traddr": "10.0.0.2", 00:19:58.309 "trsvcid": "4420" 00:19:58.309 }, 00:19:58.309 "peer_address": { 00:19:58.309 "trtype": "TCP", 00:19:58.309 "adrfam": "IPv4", 00:19:58.309 "traddr": "10.0.0.1", 00:19:58.309 "trsvcid": "42250" 00:19:58.309 }, 00:19:58.309 "auth": { 00:19:58.309 "state": "completed", 00:19:58.309 "digest": "sha512", 00:19:58.309 "dhgroup": "ffdhe3072" 00:19:58.309 } 00:19:58.309 } 00:19:58.309 ]' 00:19:58.309 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.309 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.309 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.309 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:58.309 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.309 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:58.309 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.309 13:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.567 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:19:58.567 13:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:19:59.502 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.502 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:19:59.502 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.502 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.502 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.502 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:59.502 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:59.502 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:59.761 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:59.761 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.761 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:59.761 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:59.761 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:59.761 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.761 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:19:59.761 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.761 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.761 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.761 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:59.761 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.761 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:00.326 00:20:00.326 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.326 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.326 13:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.584 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.584 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.584 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.584 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.584 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.584 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.584 { 00:20:00.584 "cntlid": 119, 00:20:00.584 "qid": 0, 00:20:00.584 "state": "enabled", 00:20:00.584 "thread": "nvmf_tgt_poll_group_000", 00:20:00.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:00.584 "listen_address": { 00:20:00.584 
"trtype": "TCP", 00:20:00.584 "adrfam": "IPv4", 00:20:00.584 "traddr": "10.0.0.2", 00:20:00.584 "trsvcid": "4420" 00:20:00.584 }, 00:20:00.584 "peer_address": { 00:20:00.584 "trtype": "TCP", 00:20:00.584 "adrfam": "IPv4", 00:20:00.584 "traddr": "10.0.0.1", 00:20:00.584 "trsvcid": "42288" 00:20:00.584 }, 00:20:00.584 "auth": { 00:20:00.584 "state": "completed", 00:20:00.584 "digest": "sha512", 00:20:00.584 "dhgroup": "ffdhe3072" 00:20:00.584 } 00:20:00.584 } 00:20:00.584 ]' 00:20:00.584 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.584 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.584 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.584 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:00.584 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.584 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.584 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.584 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.841 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:20:00.841 13:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:20:01.775 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.775 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:01.775 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.775 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.775 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.775 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.775 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.775 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:01.775 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:02.033 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:02.033 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.033 13:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:02.033 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:02.033 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:02.033 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.033 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.033 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.033 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.033 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.033 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.033 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.033 13:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.599 00:20:02.599 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.599 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.599 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.857 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.857 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.857 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.857 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.857 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.857 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.857 { 00:20:02.857 "cntlid": 121, 00:20:02.857 "qid": 0, 00:20:02.857 "state": "enabled", 00:20:02.857 "thread": "nvmf_tgt_poll_group_000", 00:20:02.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:02.857 "listen_address": { 00:20:02.857 "trtype": "TCP", 00:20:02.857 "adrfam": "IPv4", 00:20:02.857 "traddr": "10.0.0.2", 00:20:02.857 "trsvcid": "4420" 00:20:02.857 }, 00:20:02.857 "peer_address": { 00:20:02.857 "trtype": "TCP", 00:20:02.857 "adrfam": "IPv4", 00:20:02.857 "traddr": "10.0.0.1", 00:20:02.857 "trsvcid": "42306" 00:20:02.857 }, 00:20:02.857 "auth": { 00:20:02.857 "state": "completed", 00:20:02.857 "digest": "sha512", 00:20:02.857 "dhgroup": "ffdhe4096" 00:20:02.857 } 00:20:02.857 } 00:20:02.857 ]' 00:20:02.857 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.857 13:30:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.857 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.857 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.857 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.857 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.857 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.857 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.114 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:20:03.114 13:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:20:04.047 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:20:04.047 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:04.047 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.047 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.047 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.047 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.047 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:04.047 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:04.305 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:04.305 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.305 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:04.305 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:04.305 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:04.305 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.305 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.305 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.305 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.305 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.305 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.305 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.305 13:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.882 00:20:04.882 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.882 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.882 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.139 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.139 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.139 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.139 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.139 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.139 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.139 { 00:20:05.139 "cntlid": 123, 00:20:05.139 "qid": 0, 00:20:05.139 "state": "enabled", 00:20:05.139 "thread": "nvmf_tgt_poll_group_000", 00:20:05.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:05.139 "listen_address": { 00:20:05.139 "trtype": "TCP", 00:20:05.139 "adrfam": "IPv4", 00:20:05.139 "traddr": "10.0.0.2", 00:20:05.139 "trsvcid": "4420" 00:20:05.139 }, 00:20:05.139 "peer_address": { 00:20:05.139 "trtype": "TCP", 00:20:05.139 "adrfam": "IPv4", 00:20:05.139 "traddr": "10.0.0.1", 00:20:05.139 "trsvcid": "60890" 00:20:05.139 }, 00:20:05.139 "auth": { 00:20:05.139 "state": "completed", 00:20:05.139 "digest": "sha512", 00:20:05.139 "dhgroup": "ffdhe4096" 00:20:05.139 } 00:20:05.139 } 00:20:05.139 ]' 00:20:05.139 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.139 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.139 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.139 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:05.139 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.139 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:05.139 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.139 13:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.396 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:20:05.396 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:20:06.331 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.331 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:06.331 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.331 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.331 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.331 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:20:06.331 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:06.331 13:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:06.588 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:06.588 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.588 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:06.588 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:06.588 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:06.588 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.588 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.588 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.588 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.844 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.844 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.844 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.844 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:07.101 00:20:07.101 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.101 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.101 13:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.359 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.359 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.359 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.359 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.359 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.359 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.359 { 00:20:07.359 "cntlid": 125, 00:20:07.359 "qid": 0, 00:20:07.359 "state": "enabled", 00:20:07.359 "thread": "nvmf_tgt_poll_group_000", 00:20:07.359 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:07.359 "listen_address": { 00:20:07.359 "trtype": "TCP", 00:20:07.359 "adrfam": "IPv4", 00:20:07.359 "traddr": "10.0.0.2", 00:20:07.359 "trsvcid": "4420" 00:20:07.359 }, 00:20:07.359 "peer_address": { 00:20:07.359 "trtype": "TCP", 00:20:07.359 "adrfam": "IPv4", 00:20:07.359 "traddr": "10.0.0.1", 00:20:07.359 "trsvcid": "60906" 00:20:07.359 }, 00:20:07.359 "auth": { 00:20:07.359 "state": "completed", 00:20:07.359 "digest": "sha512", 00:20:07.359 "dhgroup": "ffdhe4096" 00:20:07.359 } 00:20:07.359 } 00:20:07.359 ]' 00:20:07.359 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.616 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.616 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.616 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.616 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.616 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.616 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.616 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.874 13:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:20:07.874 13:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:20:08.807 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.807 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:08.807 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.807 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.807 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.807 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.807 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:08.807 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:09.066 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:09.066 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:20:09.066 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:09.066 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:09.066 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:09.066 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.066 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:20:09.066 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.066 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.066 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.066 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:09.066 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.066 13:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.631 00:20:09.631 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:09.631 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.631 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.889 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.889 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.889 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.889 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.889 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.889 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.889 { 00:20:09.889 "cntlid": 127, 00:20:09.889 "qid": 0, 00:20:09.889 "state": "enabled", 00:20:09.889 "thread": "nvmf_tgt_poll_group_000", 00:20:09.889 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:09.889 "listen_address": { 00:20:09.889 "trtype": "TCP", 00:20:09.889 "adrfam": "IPv4", 00:20:09.889 "traddr": "10.0.0.2", 00:20:09.889 "trsvcid": "4420" 00:20:09.889 }, 00:20:09.889 "peer_address": { 00:20:09.889 "trtype": "TCP", 00:20:09.889 "adrfam": "IPv4", 00:20:09.889 "traddr": "10.0.0.1", 00:20:09.889 "trsvcid": "60938" 00:20:09.889 }, 00:20:09.889 "auth": { 00:20:09.889 "state": "completed", 00:20:09.889 "digest": "sha512", 00:20:09.889 "dhgroup": "ffdhe4096" 00:20:09.889 } 00:20:09.889 } 00:20:09.889 ]' 00:20:09.889 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.889 13:30:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.889 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.889 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:09.889 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.889 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.889 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.889 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.146 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:20:10.147 13:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:20:11.079 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.079 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:11.079 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.079 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.079 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.079 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.079 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.079 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:11.080 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:11.337 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:11.337 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.337 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:11.337 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:11.337 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:11.337 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.337 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.337 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.337 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.337 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.337 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.337 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.337 13:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.902 00:20:11.902 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.902 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.902 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.160 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.160 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.160 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.160 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.160 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.160 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.161 { 00:20:12.161 "cntlid": 129, 00:20:12.161 "qid": 0, 00:20:12.161 "state": "enabled", 00:20:12.161 "thread": "nvmf_tgt_poll_group_000", 00:20:12.161 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:12.161 "listen_address": { 00:20:12.161 "trtype": "TCP", 00:20:12.161 "adrfam": "IPv4", 00:20:12.161 "traddr": "10.0.0.2", 00:20:12.161 "trsvcid": "4420" 00:20:12.161 }, 00:20:12.161 "peer_address": { 00:20:12.161 "trtype": "TCP", 00:20:12.161 "adrfam": "IPv4", 00:20:12.161 "traddr": "10.0.0.1", 00:20:12.161 "trsvcid": "60964" 00:20:12.161 }, 00:20:12.161 "auth": { 00:20:12.161 "state": "completed", 00:20:12.161 "digest": "sha512", 00:20:12.161 "dhgroup": "ffdhe6144" 00:20:12.161 } 00:20:12.161 } 00:20:12.161 ]' 00:20:12.161 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.161 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.161 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.161 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:12.161 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.418 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:20:12.418 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.418 13:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.675 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:20:12.675 13:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:20:13.607 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.607 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:13.607 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.607 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.607 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.607 13:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.607 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.608 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:13.866 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:20:13.866 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.866 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:13.866 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:13.866 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:13.866 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.866 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.866 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.866 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.866 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.866 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:13.866 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.866 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.445 00:20:14.445 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.445 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.445 13:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.703 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.703 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.703 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.703 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.703 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.703 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.703 { 00:20:14.703 "cntlid": 131, 00:20:14.703 "qid": 0, 00:20:14.703 "state": 
"enabled", 00:20:14.703 "thread": "nvmf_tgt_poll_group_000", 00:20:14.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:14.703 "listen_address": { 00:20:14.703 "trtype": "TCP", 00:20:14.703 "adrfam": "IPv4", 00:20:14.703 "traddr": "10.0.0.2", 00:20:14.703 "trsvcid": "4420" 00:20:14.703 }, 00:20:14.703 "peer_address": { 00:20:14.703 "trtype": "TCP", 00:20:14.703 "adrfam": "IPv4", 00:20:14.703 "traddr": "10.0.0.1", 00:20:14.703 "trsvcid": "53864" 00:20:14.703 }, 00:20:14.703 "auth": { 00:20:14.703 "state": "completed", 00:20:14.703 "digest": "sha512", 00:20:14.703 "dhgroup": "ffdhe6144" 00:20:14.703 } 00:20:14.703 } 00:20:14.703 ]' 00:20:14.703 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.703 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.703 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.703 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.703 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.703 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.703 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.703 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.267 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret 
DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:20:15.267 13:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:20:15.832 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.832 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:15.832 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.832 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.832 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.832 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.832 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:15.832 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:16.397 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:20:16.397 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.397 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:16.397 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:16.397 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:16.397 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.397 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.397 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.397 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.397 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.397 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.397 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.397 13:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.654 00:20:16.654 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.654 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.654 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.220 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.220 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.220 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:17.220 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.220 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:17.220 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.220 { 00:20:17.220 "cntlid": 133, 00:20:17.220 "qid": 0, 00:20:17.220 "state": "enabled", 00:20:17.220 "thread": "nvmf_tgt_poll_group_000", 00:20:17.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:17.220 "listen_address": { 00:20:17.220 "trtype": "TCP", 00:20:17.220 "adrfam": "IPv4", 00:20:17.220 "traddr": "10.0.0.2", 00:20:17.220 "trsvcid": "4420" 00:20:17.220 }, 00:20:17.220 "peer_address": { 00:20:17.220 "trtype": "TCP", 00:20:17.220 "adrfam": "IPv4", 00:20:17.220 "traddr": "10.0.0.1", 00:20:17.220 "trsvcid": "53902" 00:20:17.220 }, 00:20:17.220 "auth": { 00:20:17.220 "state": "completed", 00:20:17.220 "digest": "sha512", 00:20:17.220 "dhgroup": "ffdhe6144" 00:20:17.220 } 
00:20:17.220 } 00:20:17.220 ]' 00:20:17.220 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.220 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:17.220 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.220 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:17.220 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.220 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.220 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.220 13:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.477 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:20:17.477 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:20:18.444 13:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:20:18.444 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.444 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:18.444 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.444 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.444 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.444 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.444 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:18.445 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:18.727 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:20:18.727 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.727 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:18.727 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:18.727 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:18.727 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.727 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:20:18.727 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.727 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.727 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.727 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:18.727 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.727 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.292 00:20:19.292 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.292 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.292 13:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.550 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.550 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:20:19.550 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.550 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.550 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.550 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.550 { 00:20:19.550 "cntlid": 135, 00:20:19.550 "qid": 0, 00:20:19.550 "state": "enabled", 00:20:19.550 "thread": "nvmf_tgt_poll_group_000", 00:20:19.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:19.550 "listen_address": { 00:20:19.550 "trtype": "TCP", 00:20:19.550 "adrfam": "IPv4", 00:20:19.550 "traddr": "10.0.0.2", 00:20:19.550 "trsvcid": "4420" 00:20:19.550 }, 00:20:19.550 "peer_address": { 00:20:19.550 "trtype": "TCP", 00:20:19.550 "adrfam": "IPv4", 00:20:19.550 "traddr": "10.0.0.1", 00:20:19.550 "trsvcid": "53936" 00:20:19.550 }, 00:20:19.550 "auth": { 00:20:19.550 "state": "completed", 00:20:19.550 "digest": "sha512", 00:20:19.550 "dhgroup": "ffdhe6144" 00:20:19.550 } 00:20:19.550 } 00:20:19.550 ]' 00:20:19.550 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.550 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.550 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.550 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.550 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.808 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.808 13:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.809 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.066 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:20:20.066 13:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.000 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.000 13:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.000 13:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:21.931 00:20:21.931 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.931 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.931 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.189 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.189 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.189 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.189 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.189 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.189 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.189 { 00:20:22.189 "cntlid": 137, 00:20:22.189 "qid": 0, 00:20:22.189 "state": "enabled", 00:20:22.189 "thread": "nvmf_tgt_poll_group_000", 00:20:22.189 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:22.189 "listen_address": { 00:20:22.189 "trtype": "TCP", 00:20:22.189 "adrfam": "IPv4", 00:20:22.189 "traddr": "10.0.0.2", 00:20:22.189 "trsvcid": "4420" 00:20:22.189 }, 00:20:22.189 "peer_address": { 00:20:22.189 "trtype": "TCP", 00:20:22.189 "adrfam": "IPv4", 00:20:22.189 "traddr": "10.0.0.1", 00:20:22.189 "trsvcid": "53962" 00:20:22.189 }, 00:20:22.189 "auth": { 00:20:22.189 "state": "completed", 00:20:22.189 "digest": "sha512", 00:20:22.189 "dhgroup": "ffdhe8192" 00:20:22.189 } 00:20:22.189 } 00:20:22.189 ]' 00:20:22.189 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.189 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:22.189 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.189 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.189 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.447 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.447 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.447 13:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.704 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:20:22.704 13:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.635 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:23.635 13:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:23.635 13:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.567 00:20:24.567 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.567 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.567 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:24.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:24.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.825 { 00:20:24.825 "cntlid": 139, 00:20:24.825 "qid": 0, 00:20:24.825 "state": "enabled", 00:20:24.825 "thread": "nvmf_tgt_poll_group_000", 00:20:24.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:24.825 "listen_address": { 00:20:24.825 "trtype": "TCP", 00:20:24.825 "adrfam": "IPv4", 00:20:24.825 "traddr": "10.0.0.2", 00:20:24.825 "trsvcid": "4420" 00:20:24.825 }, 00:20:24.825 "peer_address": { 00:20:24.825 "trtype": "TCP", 00:20:24.825 "adrfam": "IPv4", 00:20:24.825 "traddr": "10.0.0.1", 00:20:24.825 "trsvcid": "34212" 00:20:24.825 }, 00:20:24.825 "auth": { 00:20:24.825 "state": 
"completed", 00:20:24.825 "digest": "sha512", 00:20:24.825 "dhgroup": "ffdhe8192" 00:20:24.825 } 00:20:24.825 } 00:20:24.825 ]' 00:20:24.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:24.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.825 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.083 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.083 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.083 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.341 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:20:25.341 13:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: --dhchap-ctrl-secret DHHC-1:02:NGExNzlmZTk2MjgwNTJkOTIyMzcxNjFkZThjYjkyOGEwY2QwM2FlZjczOTJkMjg3HG4CXg==: 00:20:26.275 13:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.275 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:26.275 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.275 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.275 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.275 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.275 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:26.275 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:26.533 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:20:26.533 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.533 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:26.533 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:26.533 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:26.533 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.533 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.533 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.533 13:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.533 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.533 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.533 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:26.533 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.469 00:20:27.469 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.469 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.469 13:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.469 
13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.469 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.469 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.469 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.469 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.469 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.469 { 00:20:27.469 "cntlid": 141, 00:20:27.469 "qid": 0, 00:20:27.469 "state": "enabled", 00:20:27.469 "thread": "nvmf_tgt_poll_group_000", 00:20:27.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:27.469 "listen_address": { 00:20:27.469 "trtype": "TCP", 00:20:27.469 "adrfam": "IPv4", 00:20:27.469 "traddr": "10.0.0.2", 00:20:27.469 "trsvcid": "4420" 00:20:27.469 }, 00:20:27.469 "peer_address": { 00:20:27.469 "trtype": "TCP", 00:20:27.469 "adrfam": "IPv4", 00:20:27.469 "traddr": "10.0.0.1", 00:20:27.469 "trsvcid": "34230" 00:20:27.469 }, 00:20:27.469 "auth": { 00:20:27.469 "state": "completed", 00:20:27.469 "digest": "sha512", 00:20:27.469 "dhgroup": "ffdhe8192" 00:20:27.469 } 00:20:27.469 } 00:20:27.469 ]' 00:20:27.469 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.469 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.469 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.727 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.727 13:31:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.727 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.727 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.727 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.984 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:20:27.984 13:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:01:OGVhZmMyMmZhYzYxODMyNzc1ODE1YmQ2NzAwMWZkZDRxkSb6: 00:20:28.930 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.930 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.930 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:28.930 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.930 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.930 
13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.930 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.930 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:28.930 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:29.188 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:29.188 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.188 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:29.188 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:29.188 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:29.188 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.188 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:20:29.188 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.188 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.188 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.188 13:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:29.188 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.188 13:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:30.122 00:20:30.122 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.122 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.122 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.122 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.122 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.122 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.122 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.122 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.122 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.122 { 00:20:30.122 "cntlid": 143, 
00:20:30.122 "qid": 0, 00:20:30.122 "state": "enabled", 00:20:30.122 "thread": "nvmf_tgt_poll_group_000", 00:20:30.122 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:30.122 "listen_address": { 00:20:30.122 "trtype": "TCP", 00:20:30.122 "adrfam": "IPv4", 00:20:30.122 "traddr": "10.0.0.2", 00:20:30.122 "trsvcid": "4420" 00:20:30.122 }, 00:20:30.122 "peer_address": { 00:20:30.122 "trtype": "TCP", 00:20:30.122 "adrfam": "IPv4", 00:20:30.122 "traddr": "10.0.0.1", 00:20:30.122 "trsvcid": "34256" 00:20:30.122 }, 00:20:30.122 "auth": { 00:20:30.122 "state": "completed", 00:20:30.122 "digest": "sha512", 00:20:30.122 "dhgroup": "ffdhe8192" 00:20:30.122 } 00:20:30.122 } 00:20:30.122 ]' 00:20:30.122 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.381 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:30.381 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.381 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:30.381 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.381 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.381 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.381 13:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.639 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:20:30.639 13:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:20:31.573 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.573 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:31.573 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.573 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.573 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.573 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:31.573 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:31.573 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:31.573 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:31.573 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:20:31.573 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:31.831 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:31.831 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.831 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:31.831 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:31.831 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:31.831 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.831 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.831 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.831 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.831 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.831 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.831 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:31.831 13:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.765 00:20:32.765 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.765 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.765 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.765 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.765 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.765 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.765 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.765 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.765 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.765 { 00:20:32.765 "cntlid": 145, 00:20:32.765 "qid": 0, 00:20:32.765 "state": "enabled", 00:20:32.765 "thread": "nvmf_tgt_poll_group_000", 00:20:32.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:32.765 "listen_address": { 
00:20:32.765 "trtype": "TCP", 00:20:32.765 "adrfam": "IPv4", 00:20:32.765 "traddr": "10.0.0.2", 00:20:32.765 "trsvcid": "4420" 00:20:32.765 }, 00:20:32.765 "peer_address": { 00:20:32.765 "trtype": "TCP", 00:20:32.765 "adrfam": "IPv4", 00:20:32.765 "traddr": "10.0.0.1", 00:20:32.765 "trsvcid": "34282" 00:20:32.765 }, 00:20:32.765 "auth": { 00:20:32.765 "state": "completed", 00:20:32.765 "digest": "sha512", 00:20:32.765 "dhgroup": "ffdhe8192" 00:20:32.765 } 00:20:32.765 } 00:20:32.765 ]' 00:20:33.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:33.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:33.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.023 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.281 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:20:33.281 13:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:00:Y2NkZjY0MDIxNGU1YmI3NzRjMTEyZjU3MzVjYjE2NmVjZThmYjRmMWEyNzcxMTgxrtweJQ==: --dhchap-ctrl-secret DHHC-1:03:ZjM3ZjAzN2U0N2Q3OWY0OTJjMzY3MGY1Yjc1MTRhOWNlNmQzY2Y1OThhODk2ZGIyNjMxMDAxNjAyMGJiYjVhML8lm4k=: 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@650 -- # local es=0 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:34.217 13:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:35.150 request: 00:20:35.150 { 00:20:35.150 "name": "nvme0", 00:20:35.150 "trtype": "tcp", 00:20:35.150 "traddr": "10.0.0.2", 00:20:35.150 "adrfam": "ipv4", 00:20:35.150 "trsvcid": "4420", 00:20:35.150 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:35.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:35.150 "prchk_reftag": false, 00:20:35.150 "prchk_guard": false, 00:20:35.150 "hdgst": false, 00:20:35.150 "ddgst": 
false, 00:20:35.150 "dhchap_key": "key2", 00:20:35.150 "allow_unrecognized_csi": false, 00:20:35.150 "method": "bdev_nvme_attach_controller", 00:20:35.150 "req_id": 1 00:20:35.150 } 00:20:35.150 Got JSON-RPC error response 00:20:35.150 response: 00:20:35.150 { 00:20:35.150 "code": -5, 00:20:35.150 "message": "Input/output error" 00:20:35.150 } 00:20:35.150 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:35.150 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:35.150 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:35.150 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:35.150 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:35.151 13:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:36.086 request: 00:20:36.086 { 00:20:36.086 "name": "nvme0", 00:20:36.086 "trtype": "tcp", 00:20:36.086 "traddr": "10.0.0.2", 
00:20:36.086 "adrfam": "ipv4", 00:20:36.086 "trsvcid": "4420", 00:20:36.086 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:36.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:36.086 "prchk_reftag": false, 00:20:36.086 "prchk_guard": false, 00:20:36.086 "hdgst": false, 00:20:36.086 "ddgst": false, 00:20:36.086 "dhchap_key": "key1", 00:20:36.086 "dhchap_ctrlr_key": "ckey2", 00:20:36.086 "allow_unrecognized_csi": false, 00:20:36.086 "method": "bdev_nvme_attach_controller", 00:20:36.086 "req_id": 1 00:20:36.086 } 00:20:36.086 Got JSON-RPC error response 00:20:36.086 response: 00:20:36.086 { 00:20:36.086 "code": -5, 00:20:36.086 "message": "Input/output error" 00:20:36.086 } 00:20:36.086 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:36.086 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:36.086 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:36.086 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:36.086 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:36.086 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.086 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.086 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.086 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 
00:20:36.086 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.086 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.086 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.087 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.087 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:36.087 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.087 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:36.087 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.087 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:36.087 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.087 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.087 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.087 13:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.652 request: 00:20:36.652 { 00:20:36.652 "name": "nvme0", 00:20:36.652 "trtype": "tcp", 00:20:36.652 "traddr": "10.0.0.2", 00:20:36.652 "adrfam": "ipv4", 00:20:36.652 "trsvcid": "4420", 00:20:36.652 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:36.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:36.652 "prchk_reftag": false, 00:20:36.652 "prchk_guard": false, 00:20:36.652 "hdgst": false, 00:20:36.652 "ddgst": false, 00:20:36.652 "dhchap_key": "key1", 00:20:36.652 "dhchap_ctrlr_key": "ckey1", 00:20:36.652 "allow_unrecognized_csi": false, 00:20:36.653 "method": "bdev_nvme_attach_controller", 00:20:36.653 "req_id": 1 00:20:36.653 } 00:20:36.653 Got JSON-RPC error response 00:20:36.653 response: 00:20:36.653 { 00:20:36.653 "code": -5, 00:20:36.653 "message": "Input/output error" 00:20:36.653 } 00:20:36.653 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:36.653 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:36.653 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:36.653 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:36.653 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:36.653 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.653 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.653 
13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.653 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1793246 00:20:36.653 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1793246 ']' 00:20:36.653 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1793246 00:20:36.653 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:36.653 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:36.653 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1793246 00:20:36.910 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:36.910 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:36.910 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1793246' 00:20:36.910 killing process with pid 1793246 00:20:36.910 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1793246 00:20:36.910 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1793246 00:20:37.168 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:37.168 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:20:37.168 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:37.168 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:37.168 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # nvmfpid=1815160 00:20:37.168 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:37.168 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # waitforlisten 1815160 00:20:37.168 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1815160 ']' 00:20:37.168 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.168 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.168 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:37.168 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.168 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.426 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.426 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:37.426 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:20:37.426 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:37.426 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.426 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.426 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:37.426 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1815160 00:20:37.426 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 1815160 ']' 00:20:37.426 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.426 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.426 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:37.426 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.426 13:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.684 null0 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3t4 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.88M ]] 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.88M 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.684 13:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.556 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.684 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.p9j ]] 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.p9j 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.Kc5 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.943 13:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.QtK ]] 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QtK 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.xV2 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:37.943 13:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.317 nvme0n1 00:20:39.317 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.317 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.317 13:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:39.575 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.575 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.575 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.575 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.575 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.575 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.575 { 00:20:39.575 "cntlid": 1, 00:20:39.575 "qid": 0, 00:20:39.575 "state": "enabled", 00:20:39.575 "thread": "nvmf_tgt_poll_group_000", 00:20:39.575 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:39.575 "listen_address": { 00:20:39.575 "trtype": "TCP", 00:20:39.575 "adrfam": "IPv4", 00:20:39.575 "traddr": "10.0.0.2", 00:20:39.575 "trsvcid": "4420" 00:20:39.575 }, 00:20:39.575 "peer_address": { 00:20:39.575 "trtype": "TCP", 00:20:39.575 "adrfam": "IPv4", 00:20:39.575 "traddr": "10.0.0.1", 00:20:39.575 "trsvcid": "59012" 00:20:39.575 }, 00:20:39.575 "auth": { 00:20:39.575 "state": "completed", 00:20:39.575 "digest": "sha512", 00:20:39.575 "dhgroup": "ffdhe8192" 00:20:39.575 } 00:20:39.575 } 00:20:39.575 ]' 00:20:39.575 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.575 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.575 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.575 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:20:39.575 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.575 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.575 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.575 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.833 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:20:39.833 13:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:20:40.765 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.765 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:40.765 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.765 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.765 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:20:40.765 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key3 00:20:40.765 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.765 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.765 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.765 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:40.765 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:41.028 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:41.028 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:41.028 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:41.028 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:41.028 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.028 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:41.028 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.028 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:20:41.028 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.028 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.286 request: 00:20:41.286 { 00:20:41.286 "name": "nvme0", 00:20:41.286 "trtype": "tcp", 00:20:41.286 "traddr": "10.0.0.2", 00:20:41.286 "adrfam": "ipv4", 00:20:41.286 "trsvcid": "4420", 00:20:41.286 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:41.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:41.286 "prchk_reftag": false, 00:20:41.286 "prchk_guard": false, 00:20:41.286 "hdgst": false, 00:20:41.286 "ddgst": false, 00:20:41.286 "dhchap_key": "key3", 00:20:41.286 "allow_unrecognized_csi": false, 00:20:41.286 "method": "bdev_nvme_attach_controller", 00:20:41.286 "req_id": 1 00:20:41.286 } 00:20:41.286 Got JSON-RPC error response 00:20:41.286 response: 00:20:41.286 { 00:20:41.286 "code": -5, 00:20:41.286 "message": "Input/output error" 00:20:41.286 } 00:20:41.286 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:41.286 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:41.286 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:41.286 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:41.286 13:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:20:41.286 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:20:41.286 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:41.286 13:31:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.852 request: 00:20:41.852 { 00:20:41.852 "name": "nvme0", 00:20:41.852 "trtype": "tcp", 00:20:41.852 "traddr": "10.0.0.2", 00:20:41.852 "adrfam": "ipv4", 00:20:41.852 "trsvcid": "4420", 00:20:41.852 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:41.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:41.852 "prchk_reftag": false, 00:20:41.852 "prchk_guard": false, 00:20:41.852 "hdgst": false, 00:20:41.852 "ddgst": false, 00:20:41.852 "dhchap_key": "key3", 00:20:41.852 "allow_unrecognized_csi": false, 00:20:41.852 "method": "bdev_nvme_attach_controller", 00:20:41.852 "req_id": 1 00:20:41.852 } 00:20:41.852 Got JSON-RPC error response 00:20:41.852 response: 00:20:41.852 { 00:20:41.852 "code": -5, 00:20:41.852 "message": "Input/output error" 00:20:41.852 } 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 
00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:41.852 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key 
key1 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:42.110 13:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:42.676 request: 00:20:42.676 { 00:20:42.676 "name": "nvme0", 00:20:42.676 "trtype": "tcp", 00:20:42.676 "traddr": "10.0.0.2", 00:20:42.676 "adrfam": "ipv4", 00:20:42.676 "trsvcid": "4420", 00:20:42.676 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:42.676 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:42.676 "prchk_reftag": false, 00:20:42.676 "prchk_guard": false, 00:20:42.676 "hdgst": false, 00:20:42.676 "ddgst": false, 00:20:42.676 "dhchap_key": "key0", 00:20:42.677 "dhchap_ctrlr_key": "key1", 00:20:42.677 "allow_unrecognized_csi": false, 00:20:42.677 "method": "bdev_nvme_attach_controller", 00:20:42.677 "req_id": 1 00:20:42.677 } 00:20:42.677 Got JSON-RPC error response 00:20:42.677 response: 00:20:42.677 { 00:20:42.677 "code": -5, 00:20:42.677 "message": "Input/output error" 00:20:42.677 } 00:20:42.677 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:42.677 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:42.677 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:42.677 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:42.677 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:42.677 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:42.677 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:43.242 nvme0n1 00:20:43.242 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 
00:20:43.242 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.242 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:43.500 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.500 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.501 13:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.784 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 00:20:43.784 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.784 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.784 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.784 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:43.784 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:43.784 13:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:45.159 nvme0n1 00:20:45.159 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:45.159 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:45.159 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.159 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.159 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:45.159 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.159 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.417 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.417 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:45.417 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:45.417 13:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.675 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.675 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:20:45.675 13:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --hostid 21b7cb46-a602-e411-a339-001e67bc3be4 -l 0 --dhchap-secret DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: --dhchap-ctrl-secret DHHC-1:03:ZjdmMjVmMjEwZTcwOWEzYjFhMWViNzY5ZjFkNGEyYmM5M2QxMjJkYzU3NzYwOTliYzJjYzg4NWQ2YjFkMWY4Mpwai7g=: 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.606 13:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:46.606 13:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:47.540 request: 00:20:47.540 { 00:20:47.540 "name": "nvme0", 00:20:47.540 "trtype": "tcp", 00:20:47.540 "traddr": "10.0.0.2", 00:20:47.540 "adrfam": "ipv4", 00:20:47.540 "trsvcid": "4420", 00:20:47.540 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:47.540 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4", 00:20:47.540 "prchk_reftag": false, 00:20:47.540 "prchk_guard": false, 00:20:47.540 "hdgst": false, 00:20:47.541 "ddgst": false, 00:20:47.541 "dhchap_key": "key1", 00:20:47.541 "allow_unrecognized_csi": false, 00:20:47.541 "method": "bdev_nvme_attach_controller", 00:20:47.541 "req_id": 1 00:20:47.541 } 00:20:47.541 Got JSON-RPC error response 00:20:47.541 response: 00:20:47.541 { 00:20:47.541 "code": -5, 00:20:47.541 "message": "Input/output error" 00:20:47.541 } 00:20:47.541 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:47.541 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:47.541 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:47.541 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:47.541 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:47.541 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:47.541 13:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:48.917 nvme0n1 00:20:48.917 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc 
bdev_nvme_get_controllers 00:20:48.917 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:20:48.917 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.175 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.175 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.175 13:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.432 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:49.432 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.432 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.432 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.432 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:20:49.432 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:49.432 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:49.690 nvme0n1 00:20:49.690 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:20:49.690 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:20:49.690 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.948 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.948 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.948 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: '' 2s 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:50.514 13:31:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: ]] 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZDJhNzYzNzg3MTFlYTMxYTU5YjhjYTc1OTZhYWRjNTIcfcgz: 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:50.514 13:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # return 0 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: 2s 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: ]] 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo 
DHHC-1:02:YWYxZjI5NWE4ZjhiMzkxOTZjNmMwYjUxMGU5MGQzN2I3ODk3NTdiM2Q2YjE4NWMyZQOurg==: 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:52.411 13:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:54.307 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:20:54.307 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:20:54.307 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:20:54.307 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:20:54.307 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:20:54.307 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:20:54.307 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:20:54.307 13:31:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.565 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:54.565 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.565 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.565 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.565 13:31:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:54.565 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:54.565 13:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:55.937 nvme0n1 00:20:55.937 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:55.937 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.937 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.937 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.937 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:55.937 13:31:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:56.502 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:20:56.502 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:20:56.502 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.759 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.759 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:20:56.759 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.759 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.759 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.759 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:20:56.759 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:20:57.326 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:20:57.326 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:20:57.326 13:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.326 13:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.326 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:57.326 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.326 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.326 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.326 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:57.326 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:57.326 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:57.326 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:57.326 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.326 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:57.326 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.326 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:57.326 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:58.258 request: 00:20:58.258 { 00:20:58.258 "name": "nvme0", 00:20:58.258 "dhchap_key": "key1", 00:20:58.258 "dhchap_ctrlr_key": "key3", 00:20:58.258 "method": "bdev_nvme_set_keys", 00:20:58.258 "req_id": 1 00:20:58.258 } 00:20:58.258 Got JSON-RPC error response 00:20:58.258 response: 00:20:58.258 { 00:20:58.258 "code": -13, 00:20:58.258 "message": "Permission denied" 00:20:58.258 } 00:20:58.258 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:58.258 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:58.258 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:58.258 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:58.258 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:58.258 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:58.258 13:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.516 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:20:58.516 13:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:59.449 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:59.449 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:59.449 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.706 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:59.706 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:59.706 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.706 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.706 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.706 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:59.706 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:59.706 13:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:01.079 nvme0n1 00:21:01.336 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:01.336 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.336 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.336 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.336 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:01.336 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:01.336 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:01.336 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:01.336 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:01.336 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:01.336 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:01.336 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:01.336 13:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:02.270 request: 00:21:02.270 { 00:21:02.270 "name": "nvme0", 00:21:02.270 "dhchap_key": "key2", 
00:21:02.270 "dhchap_ctrlr_key": "key0", 00:21:02.270 "method": "bdev_nvme_set_keys", 00:21:02.270 "req_id": 1 00:21:02.270 } 00:21:02.270 Got JSON-RPC error response 00:21:02.270 response: 00:21:02.270 { 00:21:02.270 "code": -13, 00:21:02.270 "message": "Permission denied" 00:21:02.270 } 00:21:02.270 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:02.270 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:02.270 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:02.270 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:02.270 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:02.270 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.270 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:02.270 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:02.270 13:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:03.644 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:03.644 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:03.644 13:31:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.644 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:03.644 13:31:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:03.644 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:03.644 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1793269 00:21:03.644 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1793269 ']' 00:21:03.644 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1793269 00:21:03.644 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:03.644 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:03.644 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1793269 00:21:03.644 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:03.644 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:03.644 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1793269' 00:21:03.644 killing process with pid 1793269 00:21:03.644 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1793269 00:21:03.644 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1793269 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:04.209 rmmod nvme_tcp 00:21:04.209 rmmod nvme_fabrics 00:21:04.209 rmmod nvme_keyring 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@515 -- # '[' -n 1815160 ']' 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # killprocess 1815160 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 1815160 ']' 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 1815160 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1815160 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:04.209 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 1815160' 00:21:04.210 killing process with pid 1815160 00:21:04.210 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 1815160 00:21:04.210 13:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 1815160 00:21:04.467 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:04.467 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:04.467 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:04.467 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:04.467 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:21:04.467 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-save 00:21:04.467 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@789 -- # iptables-restore 00:21:04.467 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:04.467 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:04.467 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:04.467 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:04.467 13:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.005 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.3t4 /tmp/spdk.key-sha256.556 
/tmp/spdk.key-sha384.Kc5 /tmp/spdk.key-sha512.xV2 /tmp/spdk.key-sha512.88M /tmp/spdk.key-sha384.p9j /tmp/spdk.key-sha256.QtK '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:07.006 00:21:07.006 real 3m30.581s 00:21:07.006 user 8m14.961s 00:21:07.006 sys 0m27.566s 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.006 ************************************ 00:21:07.006 END TEST nvmf_auth_target 00:21:07.006 ************************************ 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:07.006 ************************************ 00:21:07.006 START TEST nvmf_bdevio_no_huge 00:21:07.006 ************************************ 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:07.006 * Looking for test storage... 
00:21:07.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:07.006 13:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.006 13:31:48 
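The `cmp_versions 1.15 '<' 2` trace above splits each dotted version string on `.`/`-` into an array and compares the fields numerically, padding missing fields with zero. A minimal standalone sketch of the same idea (the function name `version_lt` is illustrative, not the actual helper in scripts/common.sh):

```shell
#!/usr/bin/env bash
# Compare two dotted version strings field by field, mirroring the
# cmp_versions logic traced above. Exit 0 (true) when $1 < $2.
version_lt() {
    local -a ver1 ver2
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing fields compare as 0, so "2" == "2.0"
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal is not less-than
}
```

Comparing numerically rather than lexicographically is what makes `1.9 < 1.10` come out right, which a plain string comparison would get wrong.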
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:07.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.006 --rc genhtml_branch_coverage=1 00:21:07.006 --rc genhtml_function_coverage=1 00:21:07.006 --rc genhtml_legend=1 00:21:07.006 --rc geninfo_all_blocks=1 00:21:07.006 --rc geninfo_unexecuted_blocks=1 00:21:07.006 00:21:07.006 ' 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:07.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.006 --rc genhtml_branch_coverage=1 00:21:07.006 --rc genhtml_function_coverage=1 00:21:07.006 --rc genhtml_legend=1 00:21:07.006 --rc geninfo_all_blocks=1 00:21:07.006 --rc geninfo_unexecuted_blocks=1 00:21:07.006 00:21:07.006 ' 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:07.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.006 --rc genhtml_branch_coverage=1 00:21:07.006 --rc genhtml_function_coverage=1 00:21:07.006 --rc genhtml_legend=1 00:21:07.006 --rc geninfo_all_blocks=1 00:21:07.006 --rc geninfo_unexecuted_blocks=1 00:21:07.006 00:21:07.006 ' 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:07.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.006 --rc genhtml_branch_coverage=1 00:21:07.006 --rc genhtml_function_coverage=1 00:21:07.006 --rc genhtml_legend=1 00:21:07.006 --rc geninfo_all_blocks=1 00:21:07.006 --rc geninfo_unexecuted_blocks=1 00:21:07.006 00:21:07.006 ' 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:07.006 
13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.006 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:07.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:07.007 13:31:48 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:08.913 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 
0x1592)' 00:21:08.914 Found 0000:09:00.0 (0x8086 - 0x1592) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:21:08.914 Found 0000:09:00.1 (0x8086 - 0x1592) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- 
# for pci in "${pci_devs[@]}" 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:08.914 Found net devices under 0000:09:00.0: cvl_0_0 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:08.914 
13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:08.914 Found net devices under 0000:09:00.1: cvl_0_1 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # is_hw=yes 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:21:08.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:08.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:21:08.914 00:21:08.914 --- 10.0.0.2 ping statistics --- 00:21:08.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.914 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:08.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:08.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:21:08.914 00:21:08.914 --- 10.0.0.1 ping statistics --- 00:21:08.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:08.914 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # return 0 00:21:08.914 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
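The setup traced above follows the standard Linux point-to-point namespace pattern: create a netns, move one interface (here the physical port cvl_0_0) into it, address both ends, bring up the links plus loopback, then cross-ping to verify. A dry-run sketch of the same sequence using a veth pair instead of physical NICs (namespace, interface, and address names are illustrative; the real commands need root, so this version only prints them):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup traced above. Swap the body of
# run() for: run() { "$@"; }  to execute for real (requires root).
run() { echo "+ $*"; }

run ip netns add spdk_tgt_ns
run ip link add veth0 type veth peer name veth1
run ip link set veth0 netns spdk_tgt_ns
run ip addr add 10.0.0.1/24 dev veth1
run ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth0
run ip link set veth1 up
run ip netns exec spdk_tgt_ns ip link set veth0 up
run ip netns exec spdk_tgt_ns ip link set lo up        # lo needed inside the netns
run ping -c 1 10.0.0.2                                  # reachability check, as above
```

Isolating the target side in its own namespace is what lets the test exercise a real TCP path between "host" and "target" stacks on a single machine.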
-m 0x78 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # nvmfpid=1820175 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # waitforlisten 1820175 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 1820175 ']' 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:08.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:08.915 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:08.915 [2024-10-07 13:31:50.600645] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:21:08.915 [2024-10-07 13:31:50.600770] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:09.173 [2024-10-07 13:31:50.672256] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:09.174 [2024-10-07 13:31:50.779834] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:09.174 [2024-10-07 13:31:50.779902] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:09.174 [2024-10-07 13:31:50.779932] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:09.174 [2024-10-07 13:31:50.779943] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:09.174 [2024-10-07 13:31:50.779953] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:09.174 [2024-10-07 13:31:50.780983] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:21:09.174 [2024-10-07 13:31:50.781065] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 6 00:21:09.174 [2024-10-07 13:31:50.781069] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:21:09.174 [2024-10-07 13:31:50.781036] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 5 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.432 [2024-10-07 13:31:50.939328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:09.432 13:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.432 Malloc0 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.432 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:09.433 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.433 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:09.433 [2024-10-07 13:31:50.977442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:09.433 13:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.433 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:09.433 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:09.433 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # config=() 00:21:09.433 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # local subsystem config 00:21:09.433 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:21:09.433 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:21:09.433 { 00:21:09.433 "params": { 00:21:09.433 "name": "Nvme$subsystem", 00:21:09.433 "trtype": "$TEST_TRANSPORT", 00:21:09.433 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:09.433 "adrfam": "ipv4", 00:21:09.433 "trsvcid": "$NVMF_PORT", 00:21:09.433 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:09.433 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:09.433 "hdgst": ${hdgst:-false}, 00:21:09.433 "ddgst": ${ddgst:-false} 00:21:09.433 }, 00:21:09.433 "method": "bdev_nvme_attach_controller" 00:21:09.433 } 00:21:09.433 EOF 00:21:09.433 )") 00:21:09.433 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # cat 00:21:09.433 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # jq . 
00:21:09.433 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@583 -- # IFS=,
00:21:09.433 13:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # printf '%s\n' '{
00:21:09.433 "params": {
00:21:09.433 "name": "Nvme1",
00:21:09.433 "trtype": "tcp",
00:21:09.433 "traddr": "10.0.0.2",
00:21:09.433 "adrfam": "ipv4",
00:21:09.433 "trsvcid": "4420",
00:21:09.433 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:09.433 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:09.433 "hdgst": false,
00:21:09.433 "ddgst": false
00:21:09.433 },
00:21:09.433 "method": "bdev_nvme_attach_controller"
00:21:09.433 }'
00:21:09.433 [2024-10-07 13:31:51.029888] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization...
00:21:09.433 [2024-10-07 13:31:51.029996] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1820207 ]
00:21:09.433 [2024-10-07 13:31:51.096252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:21:09.691 [2024-10-07 13:31:51.213245] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:21:09.691 [2024-10-07 13:31:51.213301] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:21:09.691 [2024-10-07 13:31:51.213305] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:21:09.948 I/O targets:
00:21:09.948 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:21:09.948
00:21:09.948
00:21:09.948 CUnit - A unit testing framework for C - Version 2.1-3
00:21:09.948 http://cunit.sourceforge.net/
00:21:09.948
00:21:09.948
00:21:09.948 Suite: bdevio tests on: Nvme1n1
00:21:09.948 Test: blockdev write read block ...passed
00:21:09.948 Test: blockdev write zeroes read block ...passed
00:21:09.948 Test: blockdev write zeroes read no split ...passed
00:21:09.948 Test: blockdev write zeroes read split ...passed
00:21:09.949 Test: blockdev write zeroes read split partial ...passed
00:21:09.949 Test: blockdev reset ...[2024-10-07 13:31:51.563615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:09.949 [2024-10-07 13:31:51.563726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e8dc0 (9): Bad file descriptor
00:21:09.949 [2024-10-07 13:31:51.621858] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:09.949 passed
00:21:09.949 Test: blockdev write read 8 blocks ...passed
00:21:09.949 Test: blockdev write read size > 128k ...passed
00:21:09.949 Test: blockdev write read invalid size ...passed
00:21:10.206 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:21:10.206 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:21:10.206 Test: blockdev write read max offset ...passed
00:21:10.206 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:21:10.206 Test: blockdev writev readv 8 blocks ...passed
00:21:10.206 Test: blockdev writev readv 30 x 1block ...passed
00:21:10.206 Test: blockdev writev readv block ...passed
00:21:10.206 Test: blockdev writev readv size > 128k ...passed
00:21:10.206 Test: blockdev writev readv size > 128k in two iovs ...passed
00:21:10.206 Test: blockdev comparev and writev ...[2024-10-07 13:31:51.836836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:10.206 [2024-10-07 13:31:51.836870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:10.206 [2024-10-07 13:31:51.836894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:10.206 [2024-10-07 13:31:51.836910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:21:10.206 [2024-10-07 13:31:51.837223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:10.206 [2024-10-07 13:31:51.837247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:21:10.206 [2024-10-07 13:31:51.837269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:10.206 [2024-10-07 13:31:51.837284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:21:10.206 [2024-10-07 13:31:51.837583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:10.206 [2024-10-07 13:31:51.837606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:21:10.206 [2024-10-07 13:31:51.837628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:10.206 [2024-10-07 13:31:51.837643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:21:10.206 [2024-10-07 13:31:51.837958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:10.206 [2024-10-07 13:31:51.837982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:21:10.206 [2024-10-07 13:31:51.838003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:21:10.206 [2024-10-07 13:31:51.838019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:21:10.206 passed
00:21:10.464 Test: blockdev nvme passthru rw ...passed
00:21:10.464 Test: blockdev nvme passthru vendor specific ...[2024-10-07 13:31:51.921956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:10.464 [2024-10-07 13:31:51.921984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:21:10.464 [2024-10-07 13:31:51.922127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:10.464 [2024-10-07 13:31:51.922151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:21:10.464 [2024-10-07 13:31:51.922294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:10.464 [2024-10-07 13:31:51.922317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:21:10.464 [2024-10-07 13:31:51.922454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:10.464 [2024-10-07 13:31:51.922483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:21:10.464 passed
00:21:10.464 Test: blockdev nvme admin passthru ...passed
00:21:10.464 Test: blockdev copy ...passed
00:21:10.464
00:21:10.464 Run Summary: Type Total Ran Passed Failed Inactive
00:21:10.464 suites 1 1 n/a 0 0
00:21:10.464 tests 23 23 23 0 0
00:21:10.464 asserts 152 152 152 0 n/a
00:21:10.464
00:21:10.464 Elapsed time = 1.143 seconds
00:21:10.723 13:31:52
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # nvmfcleanup 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:10.723 rmmod nvme_tcp 00:21:10.723 rmmod nvme_fabrics 00:21:10.723 rmmod nvme_keyring 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@515 -- # '[' -n 1820175 ']' 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@516 -- # killprocess 1820175 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 1820175 ']' 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 1820175 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:10.723 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1820175 00:21:10.981 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:21:10.981 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:21:10.981 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1820175' 00:21:10.981 killing process with pid 1820175 00:21:10.981 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 1820175 00:21:10.981 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 1820175 00:21:11.241 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:21:11.241 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:21:11.241 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:21:11.241 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:11.241 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-save 00:21:11.241 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@789 -- # grep -v SPDK_NVMF
00:21:11.241 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@789 -- # iptables-restore
00:21:11.241 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:21:11.241 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns
00:21:11.241 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:11.241 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:11.241 13:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:13.812 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:21:13.812
00:21:13.812 real 0m6.755s
00:21:13.812 user 0m10.951s
00:21:13.812 sys 0m2.674s
00:21:13.812 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:13.812 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:21:13.812 ************************************
00:21:13.812 END TEST nvmf_bdevio_no_huge
00:21:13.812 ************************************
00:21:13.812 13:31:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp
00:21:13.812 13:31:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:21:13.812 13:31:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:13.812 13:31:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:13.812 ************************************
00:21:13.812 START TEST nvmf_tls
00:21:13.812 ************************************ 00:21:13.812 13:31:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:13.812 * Looking for test storage... 00:21:13.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:13.812 13:31:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 
'LCOV_OPTS= 00:21:13.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.812 --rc genhtml_branch_coverage=1 00:21:13.812 --rc genhtml_function_coverage=1 00:21:13.812 --rc genhtml_legend=1 00:21:13.812 --rc geninfo_all_blocks=1 00:21:13.812 --rc geninfo_unexecuted_blocks=1 00:21:13.812 00:21:13.812 ' 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:13.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.812 --rc genhtml_branch_coverage=1 00:21:13.812 --rc genhtml_function_coverage=1 00:21:13.812 --rc genhtml_legend=1 00:21:13.812 --rc geninfo_all_blocks=1 00:21:13.812 --rc geninfo_unexecuted_blocks=1 00:21:13.812 00:21:13.812 ' 00:21:13.812 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:13.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.812 --rc genhtml_branch_coverage=1 00:21:13.813 --rc genhtml_function_coverage=1 00:21:13.813 --rc genhtml_legend=1 00:21:13.813 --rc geninfo_all_blocks=1 00:21:13.813 --rc geninfo_unexecuted_blocks=1 00:21:13.813 00:21:13.813 ' 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:13.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.813 --rc genhtml_branch_coverage=1 00:21:13.813 --rc genhtml_function_coverage=1 00:21:13.813 --rc genhtml_legend=1 00:21:13.813 --rc geninfo_all_blocks=1 00:21:13.813 --rc geninfo_unexecuted_blocks=1 00:21:13.813 00:21:13.813 ' 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:13.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # prepare_net_devs 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # local -g is_hw=no 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # remove_spdk_ns 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:21:13.813 13:31:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:15.717 13:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:21:15.717 Found 0000:09:00.0 (0x8086 - 0x1592) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:21:15.717 Found 0000:09:00.1 (0x8086 - 0x1592) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:15.717 13:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:15.717 Found net devices under 0000:09:00.0: cvl_0_0 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ up == up ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:15.717 Found net devices under 0000:09:00.1: cvl_0_1 00:21:15.717 13:31:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # is_hw=yes 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:15.717 
13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:15.717 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:15.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:15.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:21:15.718 00:21:15.718 --- 10.0.0.2 ping statistics --- 00:21:15.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.718 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:15.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:15.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:21:15.718 00:21:15.718 --- 10.0.0.1 ping statistics --- 00:21:15.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:15.718 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # return 0 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1822290 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1822290 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1822290 ']' 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:15.718 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.718 [2024-10-07 13:31:57.311861] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:21:15.718 [2024-10-07 13:31:57.311930] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.718 [2024-10-07 13:31:57.371153] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.975 [2024-10-07 13:31:57.474679] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:15.975 [2024-10-07 13:31:57.474741] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:15.975 [2024-10-07 13:31:57.474756] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.975 [2024-10-07 13:31:57.474768] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.975 [2024-10-07 13:31:57.474778] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.975 [2024-10-07 13:31:57.475309] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.975 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:15.975 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:15.975 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:15.975 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:15.975 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.975 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.975 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:15.975 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:16.233 true 00:21:16.233 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:16.233 13:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:16.491 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:16.491 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:16.491 
13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:16.749 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:16.749 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:17.007 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:17.265 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:17.265 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:17.524 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:17.524 13:31:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:21:17.782 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:21:17.782 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:21:17.782 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:17.782 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:21:18.040 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:21:18.040 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:21:18.040 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:21:18.297 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:18.297 13:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:18.555 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:21:18.555 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:21:18.555 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:18.813 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:18.813 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:21:19.071 13:32:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=ffeeddccbbaa99887766554433221100 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=1 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.7VwxzkbOdt 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.xW1iH6Yofa 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.7VwxzkbOdt 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.xW1iH6Yofa 00:21:19.071 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:19.330 13:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:19.895 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.7VwxzkbOdt 00:21:19.895 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.7VwxzkbOdt 00:21:19.895 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:19.895 [2024-10-07 13:32:01.596337] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:20.153 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:20.412 13:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:20.670 [2024-10-07 13:32:02.133777] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:20.670 [2024-10-07 13:32:02.134087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:20.670 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:20.928 malloc0 00:21:20.928 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:21.186 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.7VwxzkbOdt 00:21:21.443 13:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:21.701 13:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.7VwxzkbOdt 00:21:31.666 Initializing NVMe Controllers 00:21:31.666 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.666 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:31.666 Initialization complete. Launching workers. 
00:21:31.666 ======================================================== 00:21:31.666 Latency(us) 00:21:31.666 Device Information : IOPS MiB/s Average min max 00:21:31.666 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8572.89 33.49 7467.53 1081.59 9746.95 00:21:31.666 ======================================================== 00:21:31.666 Total : 8572.89 33.49 7467.53 1081.59 9746.95 00:21:31.666 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7VwxzkbOdt 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7VwxzkbOdt 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1824109 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1824109 /var/tmp/bdevperf.sock 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1824109 ']' 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:31.666 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.923 [2024-10-07 13:32:13.407254] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:21:31.923 [2024-10-07 13:32:13.407330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1824109 ] 00:21:31.924 [2024-10-07 13:32:13.461979] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.924 [2024-10-07 13:32:13.567309] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.181 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:32.181 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:32.181 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7VwxzkbOdt 00:21:32.438 13:32:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:21:32.696 [2024-10-07 13:32:14.195372] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:32.696 TLSTESTn1 00:21:32.696 13:32:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:32.696 Running I/O for 10 seconds... 00:21:35.000 3180.00 IOPS, 12.42 MiB/s [2024-10-07T11:32:17.644Z] 3322.00 IOPS, 12.98 MiB/s [2024-10-07T11:32:18.587Z] 3333.33 IOPS, 13.02 MiB/s [2024-10-07T11:32:19.517Z] 3314.50 IOPS, 12.95 MiB/s [2024-10-07T11:32:20.449Z] 3334.40 IOPS, 13.03 MiB/s [2024-10-07T11:32:21.821Z] 3354.17 IOPS, 13.10 MiB/s [2024-10-07T11:32:22.753Z] 3361.86 IOPS, 13.13 MiB/s [2024-10-07T11:32:23.684Z] 3371.25 IOPS, 13.17 MiB/s [2024-10-07T11:32:24.618Z] 3377.22 IOPS, 13.19 MiB/s [2024-10-07T11:32:24.618Z] 3384.60 IOPS, 13.22 MiB/s 00:21:42.906 Latency(us) 00:21:42.906 [2024-10-07T11:32:24.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.906 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:42.906 Verification LBA range: start 0x0 length 0x2000 00:21:42.906 TLSTESTn1 : 10.02 3391.08 13.25 0.00 0.00 37687.35 6747.78 56312.41 00:21:42.906 [2024-10-07T11:32:24.618Z] =================================================================================================================== 00:21:42.906 [2024-10-07T11:32:24.618Z] Total : 3391.08 13.25 0.00 0.00 37687.35 6747.78 56312.41 00:21:42.906 { 00:21:42.906 "results": [ 00:21:42.906 { 00:21:42.906 "job": "TLSTESTn1", 00:21:42.906 "core_mask": "0x4", 00:21:42.906 "workload": "verify", 00:21:42.906 "status": "finished", 00:21:42.906 "verify_range": { 00:21:42.906 "start": 0, 00:21:42.906 "length": 8192 00:21:42.906 }, 00:21:42.906 "queue_depth": 128, 00:21:42.906 "io_size": 4096, 00:21:42.906 "runtime": 10.018357, 00:21:42.906 "iops": 
3391.0750036158624, 00:21:42.906 "mibps": 13.246386732874463, 00:21:42.906 "io_failed": 0, 00:21:42.906 "io_timeout": 0, 00:21:42.906 "avg_latency_us": 37687.3549837289, 00:21:42.906 "min_latency_us": 6747.780740740741, 00:21:42.906 "max_latency_us": 56312.414814814816 00:21:42.906 } 00:21:42.906 ], 00:21:42.906 "core_count": 1 00:21:42.906 } 00:21:42.906 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:42.906 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1824109 00:21:42.906 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1824109 ']' 00:21:42.906 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1824109 00:21:42.906 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:42.906 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.906 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1824109 00:21:42.906 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:42.906 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:42.906 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1824109' 00:21:42.906 killing process with pid 1824109 00:21:42.906 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1824109 00:21:42.906 Received shutdown signal, test time was about 10.000000 seconds 00:21:42.906 00:21:42.906 Latency(us) 00:21:42.906 [2024-10-07T11:32:24.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.906 [2024-10-07T11:32:24.618Z] 
=================================================================================================================== 00:21:42.906 [2024-10-07T11:32:24.618Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.906 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1824109 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xW1iH6Yofa 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xW1iH6Yofa 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xW1iH6Yofa 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xW1iH6Yofa 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1825367 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1825367 /var/tmp/bdevperf.sock 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1825367 ']' 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.165 13:32:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:43.165 [2024-10-07 13:32:24.782974] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:21:43.165 [2024-10-07 13:32:24.783050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825367 ] 00:21:43.165 [2024-10-07 13:32:24.840755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.423 [2024-10-07 13:32:24.948660] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:43.423 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.423 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:43.423 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xW1iH6Yofa 00:21:43.680 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:43.938 [2024-10-07 13:32:25.572592] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:43.938 [2024-10-07 13:32:25.581959] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:43.938 [2024-10-07 13:32:25.582757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18aa520 (107): Transport endpoint is not connected 00:21:43.938 [2024-10-07 13:32:25.583748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18aa520 (9): Bad file descriptor 00:21:43.938 
[2024-10-07 13:32:25.584747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:43.938 [2024-10-07 13:32:25.584768] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:43.938 [2024-10-07 13:32:25.584783] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:43.938 [2024-10-07 13:32:25.584801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:43.938 request: 00:21:43.938 { 00:21:43.938 "name": "TLSTEST", 00:21:43.938 "trtype": "tcp", 00:21:43.938 "traddr": "10.0.0.2", 00:21:43.938 "adrfam": "ipv4", 00:21:43.938 "trsvcid": "4420", 00:21:43.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.938 "prchk_reftag": false, 00:21:43.938 "prchk_guard": false, 00:21:43.938 "hdgst": false, 00:21:43.938 "ddgst": false, 00:21:43.938 "psk": "key0", 00:21:43.938 "allow_unrecognized_csi": false, 00:21:43.938 "method": "bdev_nvme_attach_controller", 00:21:43.938 "req_id": 1 00:21:43.938 } 00:21:43.938 Got JSON-RPC error response 00:21:43.938 response: 00:21:43.938 { 00:21:43.938 "code": -5, 00:21:43.938 "message": "Input/output error" 00:21:43.938 } 00:21:43.938 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1825367 00:21:43.938 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1825367 ']' 00:21:43.938 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1825367 00:21:43.938 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:43.939 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:43.939 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1825367 00:21:43.939 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:43.939 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:43.939 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1825367' 00:21:43.939 killing process with pid 1825367 00:21:43.939 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1825367 00:21:43.939 Received shutdown signal, test time was about 10.000000 seconds 00:21:43.939 00:21:43.939 Latency(us) 00:21:43.939 [2024-10-07T11:32:25.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:43.939 [2024-10-07T11:32:25.651Z] =================================================================================================================== 00:21:43.939 [2024-10-07T11:32:25.651Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:43.939 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1825367 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7VwxzkbOdt 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 
00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7VwxzkbOdt 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.7VwxzkbOdt 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7VwxzkbOdt 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1825508 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1825508 
/var/tmp/bdevperf.sock 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1825508 ']' 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:44.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:44.197 13:32:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.455 [2024-10-07 13:32:25.925274] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:21:44.455 [2024-10-07 13:32:25.925347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825508 ] 00:21:44.455 [2024-10-07 13:32:25.982528] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:44.455 [2024-10-07 13:32:26.092208] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:44.712 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:44.712 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:44.713 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7VwxzkbOdt 00:21:44.972 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:45.256 [2024-10-07 13:32:26.703481] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:45.256 [2024-10-07 13:32:26.714813] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:45.256 [2024-10-07 13:32:26.714843] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:45.256 [2024-10-07 13:32:26.714897] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:45.256 [2024-10-07 13:32:26.715861] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d1520 (107): Transport endpoint is not connected 00:21:45.256 [2024-10-07 13:32:26.716851] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d1520 (9): Bad file descriptor 00:21:45.256 [2024-10-07 13:32:26.717850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:45.256 [2024-10-07 13:32:26.717874] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:45.256 [2024-10-07 13:32:26.717889] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:45.256 [2024-10-07 13:32:26.717906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:45.256 request: 00:21:45.256 { 00:21:45.256 "name": "TLSTEST", 00:21:45.256 "trtype": "tcp", 00:21:45.256 "traddr": "10.0.0.2", 00:21:45.256 "adrfam": "ipv4", 00:21:45.256 "trsvcid": "4420", 00:21:45.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.256 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:45.256 "prchk_reftag": false, 00:21:45.256 "prchk_guard": false, 00:21:45.256 "hdgst": false, 00:21:45.256 "ddgst": false, 00:21:45.256 "psk": "key0", 00:21:45.256 "allow_unrecognized_csi": false, 00:21:45.256 "method": "bdev_nvme_attach_controller", 00:21:45.256 "req_id": 1 00:21:45.256 } 00:21:45.256 Got JSON-RPC error response 00:21:45.256 response: 00:21:45.256 { 00:21:45.256 "code": -5, 00:21:45.256 "message": "Input/output error" 00:21:45.256 } 00:21:45.256 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1825508 00:21:45.256 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1825508 ']' 00:21:45.256 13:32:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1825508 00:21:45.256 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:45.256 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:45.256 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1825508 00:21:45.256 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:45.256 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:45.256 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1825508' 00:21:45.256 killing process with pid 1825508 00:21:45.256 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1825508 00:21:45.256 Received shutdown signal, test time was about 10.000000 seconds 00:21:45.256 00:21:45.256 Latency(us) 00:21:45.256 [2024-10-07T11:32:26.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:45.256 [2024-10-07T11:32:26.968Z] =================================================================================================================== 00:21:45.256 [2024-10-07T11:32:26.968Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:45.256 13:32:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1825508 00:21:45.518 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:45.518 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:45.518 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:45.518 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:45.518 13:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:45.518 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7VwxzkbOdt 00:21:45.518 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:45.518 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7VwxzkbOdt 00:21:45.518 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.7VwxzkbOdt 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7VwxzkbOdt 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1825643 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1825643 /var/tmp/bdevperf.sock 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1825643 ']' 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:45.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:45.519 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:45.519 [2024-10-07 13:32:27.089064] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:21:45.519 [2024-10-07 13:32:27.089143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825643 ] 00:21:45.519 [2024-10-07 13:32:27.145227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.777 [2024-10-07 13:32:27.255253] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.777 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:45.777 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:45.777 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7VwxzkbOdt 00:21:46.035 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:46.293 [2024-10-07 13:32:27.928381] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:46.293 [2024-10-07 13:32:27.938062] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:46.293 [2024-10-07 13:32:27.938091] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:46.293 [2024-10-07 13:32:27.938145] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:46.293 [2024-10-07 13:32:27.938579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f67520 (107): Transport endpoint is not connected 00:21:46.293 [2024-10-07 13:32:27.939569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f67520 (9): Bad file descriptor 00:21:46.293 [2024-10-07 13:32:27.940567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:46.293 [2024-10-07 13:32:27.940587] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:46.293 [2024-10-07 13:32:27.940616] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:46.293 [2024-10-07 13:32:27.940634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:46.293 request: 00:21:46.293 { 00:21:46.293 "name": "TLSTEST", 00:21:46.293 "trtype": "tcp", 00:21:46.293 "traddr": "10.0.0.2", 00:21:46.293 "adrfam": "ipv4", 00:21:46.293 "trsvcid": "4420", 00:21:46.293 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:46.293 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:46.293 "prchk_reftag": false, 00:21:46.293 "prchk_guard": false, 00:21:46.293 "hdgst": false, 00:21:46.293 "ddgst": false, 00:21:46.293 "psk": "key0", 00:21:46.293 "allow_unrecognized_csi": false, 00:21:46.293 "method": "bdev_nvme_attach_controller", 00:21:46.293 "req_id": 1 00:21:46.293 } 00:21:46.293 Got JSON-RPC error response 00:21:46.293 response: 00:21:46.293 { 00:21:46.293 "code": -5, 00:21:46.293 "message": "Input/output error" 00:21:46.293 } 00:21:46.293 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1825643 00:21:46.293 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1825643 ']' 00:21:46.293 13:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1825643 00:21:46.293 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:46.293 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:46.293 13:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1825643 00:21:46.293 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:46.293 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:46.293 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1825643' 00:21:46.293 killing process with pid 1825643 00:21:46.293 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1825643 00:21:46.293 Received shutdown signal, test time was about 10.000000 seconds 00:21:46.293 00:21:46.293 Latency(us) 00:21:46.293 [2024-10-07T11:32:28.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.293 [2024-10-07T11:32:28.005Z] =================================================================================================================== 00:21:46.293 [2024-10-07T11:32:28.005Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:46.293 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1825643 00:21:46.551 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:46.551 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:46.551 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:46.551 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:46.812 13:32:28 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1825782 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1825782 /var/tmp/bdevperf.sock 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1825782 ']' 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:46.812 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.812 [2024-10-07 13:32:28.315469] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:21:46.812 [2024-10-07 13:32:28.315544] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825782 ] 00:21:46.812 [2024-10-07 13:32:28.371690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.812 [2024-10-07 13:32:28.483311] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.071 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.071 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:47.071 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:47.328 [2024-10-07 13:32:28.902684] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:47.328 [2024-10-07 13:32:28.902728] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:47.328 request: 00:21:47.328 { 00:21:47.328 "name": "key0", 00:21:47.328 "path": "", 00:21:47.328 "method": "keyring_file_add_key", 00:21:47.328 "req_id": 1 00:21:47.328 } 00:21:47.328 Got JSON-RPC error response 00:21:47.328 response: 00:21:47.328 { 00:21:47.328 "code": -1, 00:21:47.328 "message": "Operation not permitted" 00:21:47.328 } 00:21:47.328 13:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:47.586 [2024-10-07 13:32:29.167481] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:21:47.586 [2024-10-07 13:32:29.167541] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:47.586 request: 00:21:47.586 { 00:21:47.586 "name": "TLSTEST", 00:21:47.586 "trtype": "tcp", 00:21:47.586 "traddr": "10.0.0.2", 00:21:47.586 "adrfam": "ipv4", 00:21:47.586 "trsvcid": "4420", 00:21:47.586 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:47.586 "prchk_reftag": false, 00:21:47.586 "prchk_guard": false, 00:21:47.586 "hdgst": false, 00:21:47.586 "ddgst": false, 00:21:47.586 "psk": "key0", 00:21:47.586 "allow_unrecognized_csi": false, 00:21:47.586 "method": "bdev_nvme_attach_controller", 00:21:47.586 "req_id": 1 00:21:47.586 } 00:21:47.586 Got JSON-RPC error response 00:21:47.586 response: 00:21:47.586 { 00:21:47.586 "code": -126, 00:21:47.586 "message": "Required key not available" 00:21:47.586 } 00:21:47.586 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1825782 00:21:47.586 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1825782 ']' 00:21:47.586 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1825782 00:21:47.586 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:47.586 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:47.586 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1825782 00:21:47.586 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:47.586 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:47.586 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1825782' 00:21:47.586 killing process with pid 1825782 
00:21:47.586 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1825782 00:21:47.586 Received shutdown signal, test time was about 10.000000 seconds 00:21:47.586 00:21:47.586 Latency(us) 00:21:47.586 [2024-10-07T11:32:29.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:47.586 [2024-10-07T11:32:29.298Z] =================================================================================================================== 00:21:47.586 [2024-10-07T11:32:29.298Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:47.586 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1825782 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1822290 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1822290 ']' 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1822290 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1822290 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- 
# process_name=reactor_1 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1822290' 00:21:47.844 killing process with pid 1822290 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1822290 00:21:47.844 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1822290 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@741 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # local prefix key digest 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # prefix=NVMeTLSkey-1 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # digest=2 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@731 -- # python - 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.p85alnbf81 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:48.102 13:32:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.p85alnbf81 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1826040 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1826040 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1826040 ']' 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:48.102 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.360 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:48.360 13:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.360 [2024-10-07 13:32:29.864535] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:21:48.360 [2024-10-07 13:32:29.864643] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.360 [2024-10-07 13:32:29.925953] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.360 [2024-10-07 13:32:30.043349] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.360 [2024-10-07 13:32:30.043436] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.360 [2024-10-07 13:32:30.043450] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.360 [2024-10-07 13:32:30.043460] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.360 [2024-10-07 13:32:30.043485] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:48.360 [2024-10-07 13:32:30.044092] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.618 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:48.618 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:48.618 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:21:48.618 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:48.618 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.618 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.618 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.p85alnbf81 00:21:48.618 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.p85alnbf81 00:21:48.618 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:48.876 [2024-10-07 13:32:30.482305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:48.876 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:49.134 13:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:49.392 [2024-10-07 13:32:31.039820] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.392 [2024-10-07 13:32:31.040130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:49.392 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:49.971 malloc0 00:21:49.971 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:50.228 13:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.p85alnbf81 00:21:50.486 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p85alnbf81 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.p85alnbf81 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1826323 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:50.745 13:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1826323 /var/tmp/bdevperf.sock 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1826323 ']' 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:50.745 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.745 [2024-10-07 13:32:32.364441] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:21:50.745 [2024-10-07 13:32:32.364529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1826323 ] 00:21:50.745 [2024-10-07 13:32:32.418921] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.003 [2024-10-07 13:32:32.525733] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.003 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:51.003 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:51.003 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p85alnbf81 00:21:51.260 13:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:51.518 [2024-10-07 13:32:33.146589] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.518 TLSTESTn1 00:21:51.776 13:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:51.776 Running I/O for 10 seconds... 
00:21:54.080 3301.00 IOPS, 12.89 MiB/s [2024-10-07T11:32:36.724Z] 3369.50 IOPS, 13.16 MiB/s [2024-10-07T11:32:37.656Z] 3395.33 IOPS, 13.26 MiB/s [2024-10-07T11:32:38.587Z] 3396.00 IOPS, 13.27 MiB/s [2024-10-07T11:32:39.521Z] 3389.40 IOPS, 13.24 MiB/s [2024-10-07T11:32:40.454Z] 3379.17 IOPS, 13.20 MiB/s [2024-10-07T11:32:41.826Z] 3370.86 IOPS, 13.17 MiB/s [2024-10-07T11:32:42.391Z] 3366.50 IOPS, 13.15 MiB/s [2024-10-07T11:32:43.764Z] 3369.67 IOPS, 13.16 MiB/s [2024-10-07T11:32:43.764Z] 3363.60 IOPS, 13.14 MiB/s 00:22:02.052 Latency(us) 00:22:02.052 [2024-10-07T11:32:43.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.052 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:02.052 Verification LBA range: start 0x0 length 0x2000 00:22:02.052 TLSTESTn1 : 10.02 3369.79 13.16 0.00 0.00 37926.23 6407.96 31845.64 00:22:02.052 [2024-10-07T11:32:43.764Z] =================================================================================================================== 00:22:02.052 [2024-10-07T11:32:43.765Z] Total : 3369.79 13.16 0.00 0.00 37926.23 6407.96 31845.64 00:22:02.053 { 00:22:02.053 "results": [ 00:22:02.053 { 00:22:02.053 "job": "TLSTESTn1", 00:22:02.053 "core_mask": "0x4", 00:22:02.053 "workload": "verify", 00:22:02.053 "status": "finished", 00:22:02.053 "verify_range": { 00:22:02.053 "start": 0, 00:22:02.053 "length": 8192 00:22:02.053 }, 00:22:02.053 "queue_depth": 128, 00:22:02.053 "io_size": 4096, 00:22:02.053 "runtime": 10.019618, 00:22:02.053 "iops": 3369.78914765014, 00:22:02.053 "mibps": 13.16323885800836, 00:22:02.053 "io_failed": 0, 00:22:02.053 "io_timeout": 0, 00:22:02.053 "avg_latency_us": 37926.22570294024, 00:22:02.053 "min_latency_us": 6407.964444444445, 00:22:02.053 "max_latency_us": 31845.64148148148 00:22:02.053 } 00:22:02.053 ], 00:22:02.053 "core_count": 1 00:22:02.053 } 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1826323 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1826323 ']' 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1826323 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1826323 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1826323' 00:22:02.053 killing process with pid 1826323 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1826323 00:22:02.053 Received shutdown signal, test time was about 10.000000 seconds 00:22:02.053 00:22:02.053 Latency(us) 00:22:02.053 [2024-10-07T11:32:43.765Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:02.053 [2024-10-07T11:32:43.765Z] =================================================================================================================== 00:22:02.053 [2024-10-07T11:32:43.765Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1826323 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.p85alnbf81 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p85alnbf81 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p85alnbf81 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.p85alnbf81 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.p85alnbf81 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1827578 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.053 
13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1827578 /var/tmp/bdevperf.sock 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1827578 ']' 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.053 13:32:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.311 [2024-10-07 13:32:43.795292] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:22:02.311 [2024-10-07 13:32:43.795376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1827578 ] 00:22:02.311 [2024-10-07 13:32:43.850445] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.311 [2024-10-07 13:32:43.963546] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.573 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:02.573 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:02.573 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p85alnbf81 00:22:02.830 [2024-10-07 13:32:44.340182] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.p85alnbf81': 0100666 00:22:02.830 [2024-10-07 13:32:44.340221] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:02.830 request: 00:22:02.830 { 00:22:02.830 "name": "key0", 00:22:02.830 "path": "/tmp/tmp.p85alnbf81", 00:22:02.830 "method": "keyring_file_add_key", 00:22:02.830 "req_id": 1 00:22:02.830 } 00:22:02.830 Got JSON-RPC error response 00:22:02.830 response: 00:22:02.830 { 00:22:02.830 "code": -1, 00:22:02.830 "message": "Operation not permitted" 00:22:02.830 } 00:22:02.830 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:03.088 [2024-10-07 13:32:44.609022] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.088 [2024-10-07 13:32:44.609078] bdev_nvme.c:6391:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:03.088 request: 00:22:03.088 { 00:22:03.088 "name": "TLSTEST", 00:22:03.088 "trtype": "tcp", 00:22:03.088 "traddr": "10.0.0.2", 00:22:03.088 "adrfam": "ipv4", 00:22:03.088 "trsvcid": "4420", 00:22:03.088 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.088 "prchk_reftag": false, 00:22:03.088 "prchk_guard": false, 00:22:03.088 "hdgst": false, 00:22:03.088 "ddgst": false, 00:22:03.088 "psk": "key0", 00:22:03.088 "allow_unrecognized_csi": false, 00:22:03.088 "method": "bdev_nvme_attach_controller", 00:22:03.088 "req_id": 1 00:22:03.088 } 00:22:03.088 Got JSON-RPC error response 00:22:03.088 response: 00:22:03.088 { 00:22:03.088 "code": -126, 00:22:03.088 "message": "Required key not available" 00:22:03.088 } 00:22:03.088 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1827578 00:22:03.088 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1827578 ']' 00:22:03.088 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1827578 00:22:03.088 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:03.088 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.088 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1827578 00:22:03.088 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:03.088 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:03.088 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 1827578' 00:22:03.088 killing process with pid 1827578 00:22:03.088 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1827578 00:22:03.088 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.088 00:22:03.088 Latency(us) 00:22:03.088 [2024-10-07T11:32:44.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.088 [2024-10-07T11:32:44.800Z] =================================================================================================================== 00:22:03.088 [2024-10-07T11:32:44.800Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:03.088 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1827578 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1826040 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1826040 ']' 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1826040 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1826040 00:22:03.345 
13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1826040' 00:22:03.345 killing process with pid 1826040 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1826040 00:22:03.345 13:32:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1826040 00:22:03.603 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:03.603 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:03.603 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:03.603 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.603 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1827727 00:22:03.603 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:03.603 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1827727 00:22:03.603 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1827727 ']' 00:22:03.603 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.603 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.603 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:22:03.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.603 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.603 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.603 [2024-10-07 13:32:45.307513] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:22:03.603 [2024-10-07 13:32:45.307607] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.862 [2024-10-07 13:32:45.371140] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.862 [2024-10-07 13:32:45.478632] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.862 [2024-10-07 13:32:45.478718] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.862 [2024-10-07 13:32:45.478742] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.862 [2024-10-07 13:32:45.478753] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.862 [2024-10-07 13:32:45.478762] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:03.862 [2024-10-07 13:32:45.479280] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.p85alnbf81 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.p85alnbf81 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.p85alnbf81 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.p85alnbf81 00:22:04.120 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:04.377 [2024-10-07 13:32:45.924231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.377 13:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:04.635 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:04.893 [2024-10-07 13:32:46.569928] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:04.893 [2024-10-07 13:32:46.570214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.893 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:05.151 malloc0 00:22:05.151 13:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:05.716 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.p85alnbf81 00:22:05.716 [2024-10-07 13:32:47.375649] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.p85alnbf81': 0100666 00:22:05.716 [2024-10-07 13:32:47.375711] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:05.716 request: 00:22:05.716 { 00:22:05.716 "name": "key0", 00:22:05.716 "path": "/tmp/tmp.p85alnbf81", 00:22:05.716 "method": "keyring_file_add_key", 00:22:05.716 "req_id": 1 
00:22:05.716 } 00:22:05.716 Got JSON-RPC error response 00:22:05.716 response: 00:22:05.716 { 00:22:05.716 "code": -1, 00:22:05.716 "message": "Operation not permitted" 00:22:05.716 } 00:22:05.716 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:05.974 [2024-10-07 13:32:47.640443] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:05.974 [2024-10-07 13:32:47.640510] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:05.974 request: 00:22:05.974 { 00:22:05.974 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:05.974 "host": "nqn.2016-06.io.spdk:host1", 00:22:05.975 "psk": "key0", 00:22:05.975 "method": "nvmf_subsystem_add_host", 00:22:05.975 "req_id": 1 00:22:05.975 } 00:22:05.975 Got JSON-RPC error response 00:22:05.975 response: 00:22:05.975 { 00:22:05.975 "code": -32603, 00:22:05.975 "message": "Internal error" 00:22:05.975 } 00:22:05.975 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:05.975 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:05.975 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:05.975 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:05.975 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1827727 00:22:05.975 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1827727 ']' 00:22:05.975 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1827727 00:22:05.975 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:05.975 13:32:47 
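The two JSON-RPC failures above are the expected negative-path results: `keyring_file_add_key` rejects the key file because its mode is 0100666 (group- and world-readable), and the subsequent `nvmf_subsystem_add_host` fails with "Internal error" because `key0` was never registered. The permission contrast SPDK enforces can be sketched outside SPDK (hypothetical temp file; only the 0666-vs-0600 distinction is the point — this is not SPDK's actual check, which lives in `keyring_file_check_path`):

```shell
# Sketch: a PSK file must not be group/world readable.
# SPDK's file keyring rejects mode 0666 and accepts 0600.
key=$(mktemp)                      # hypothetical stand-in for the PSK file
chmod 0666 "$key"                  # the mode that triggers the error above
perms_bad=$(stat -c %a "$key")
chmod 0600 "$key"                  # the mode the test applies before retrying
perms_ok=$(stat -c %a "$key")
echo "$perms_bad -> $perms_ok"
rm -f "$key"
```

The test script later runs `chmod 0600` on the same key file before repeating the setup, which is why the second pass below succeeds.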
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:05.975 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1827727 00:22:06.232 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:06.232 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:06.232 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1827727' 00:22:06.232 killing process with pid 1827727 00:22:06.232 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1827727 00:22:06.232 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1827727 00:22:06.491 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.p85alnbf81 00:22:06.491 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:06.491 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:06.491 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:06.491 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.491 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1828127 00:22:06.491 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:06.491 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1828127 00:22:06.491 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1828127 ']' 00:22:06.491 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.491 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:06.491 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.491 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:06.491 13:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.491 [2024-10-07 13:32:48.018176] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:22:06.491 [2024-10-07 13:32:48.018271] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.491 [2024-10-07 13:32:48.078124] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.491 [2024-10-07 13:32:48.174380] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.491 [2024-10-07 13:32:48.174444] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.491 [2024-10-07 13:32:48.174467] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.491 [2024-10-07 13:32:48.174477] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.491 [2024-10-07 13:32:48.174487] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:06.491 [2024-10-07 13:32:48.175066] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:06.749 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:06.749 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:06.749 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:06.749 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:06.749 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.749 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:06.749 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.p85alnbf81 00:22:06.749 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.p85alnbf81 00:22:06.749 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:07.012 [2024-10-07 13:32:48.556824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.013 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:07.271 13:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:07.528 [2024-10-07 13:32:49.090309] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:07.529 [2024-10-07 13:32:49.090558] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:22:07.529 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:07.787 malloc0 00:22:07.787 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:08.045 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.p85alnbf81 00:22:08.304 13:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:08.561 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1828401 00:22:08.561 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:08.561 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:08.561 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1828401 /var/tmp/bdevperf.sock 00:22:08.561 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1828401 ']' 00:22:08.561 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.561 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:08.561 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:22:08.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.561 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:08.561 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:08.561 [2024-10-07 13:32:50.244378] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:22:08.561 [2024-10-07 13:32:50.244471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828401 ] 00:22:08.819 [2024-10-07 13:32:50.301783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:08.819 [2024-10-07 13:32:50.410707] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.819 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:08.819 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:08.819 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p85alnbf81 00:22:09.384 13:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:09.384 [2024-10-07 13:32:51.056593] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:09.641 TLSTESTn1 00:22:09.642 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
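Condensed from the transcript, the happy-path sequence after the `chmod 0600` is the following pair of RPC flows (commands mirror the ones recorded above with the long workspace prefix shortened to `rpc.py`; this is a sketch of the transcript, not runnable without a live SPDK target at 10.0.0.2):

```
# Target side (nvmf_tgt, core mask 0x2)
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.p85alnbf81
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

# Initiator side (bdevperf, separate RPC socket)
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p85alnbf81
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
```

The `save_config` dumps that follow capture exactly this state: the keyring entry, the TLS listener (`"secure_channel": true`), and the attached `TLSTEST` controller with `"psk": "key0"`.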
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:09.908 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:09.908 "subsystems": [ 00:22:09.908 { 00:22:09.908 "subsystem": "keyring", 00:22:09.908 "config": [ 00:22:09.908 { 00:22:09.908 "method": "keyring_file_add_key", 00:22:09.908 "params": { 00:22:09.908 "name": "key0", 00:22:09.908 "path": "/tmp/tmp.p85alnbf81" 00:22:09.908 } 00:22:09.908 } 00:22:09.908 ] 00:22:09.908 }, 00:22:09.908 { 00:22:09.908 "subsystem": "iobuf", 00:22:09.908 "config": [ 00:22:09.908 { 00:22:09.908 "method": "iobuf_set_options", 00:22:09.908 "params": { 00:22:09.908 "small_pool_count": 8192, 00:22:09.908 "large_pool_count": 1024, 00:22:09.908 "small_bufsize": 8192, 00:22:09.908 "large_bufsize": 135168 00:22:09.908 } 00:22:09.908 } 00:22:09.908 ] 00:22:09.908 }, 00:22:09.908 { 00:22:09.908 "subsystem": "sock", 00:22:09.908 "config": [ 00:22:09.908 { 00:22:09.908 "method": "sock_set_default_impl", 00:22:09.908 "params": { 00:22:09.908 "impl_name": "posix" 00:22:09.908 } 00:22:09.908 }, 00:22:09.908 { 00:22:09.908 "method": "sock_impl_set_options", 00:22:09.908 "params": { 00:22:09.908 "impl_name": "ssl", 00:22:09.908 "recv_buf_size": 4096, 00:22:09.908 "send_buf_size": 4096, 00:22:09.908 "enable_recv_pipe": true, 00:22:09.908 "enable_quickack": false, 00:22:09.908 "enable_placement_id": 0, 00:22:09.908 "enable_zerocopy_send_server": true, 00:22:09.908 "enable_zerocopy_send_client": false, 00:22:09.908 "zerocopy_threshold": 0, 00:22:09.908 "tls_version": 0, 00:22:09.908 "enable_ktls": false 00:22:09.908 } 00:22:09.908 }, 00:22:09.908 { 00:22:09.908 "method": "sock_impl_set_options", 00:22:09.908 "params": { 00:22:09.908 "impl_name": "posix", 00:22:09.908 "recv_buf_size": 2097152, 00:22:09.909 "send_buf_size": 2097152, 00:22:09.909 "enable_recv_pipe": true, 00:22:09.909 "enable_quickack": false, 00:22:09.909 "enable_placement_id": 0, 00:22:09.909 
"enable_zerocopy_send_server": true, 00:22:09.909 "enable_zerocopy_send_client": false, 00:22:09.909 "zerocopy_threshold": 0, 00:22:09.909 "tls_version": 0, 00:22:09.909 "enable_ktls": false 00:22:09.909 } 00:22:09.909 } 00:22:09.909 ] 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "subsystem": "vmd", 00:22:09.909 "config": [] 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "subsystem": "accel", 00:22:09.909 "config": [ 00:22:09.909 { 00:22:09.909 "method": "accel_set_options", 00:22:09.909 "params": { 00:22:09.909 "small_cache_size": 128, 00:22:09.909 "large_cache_size": 16, 00:22:09.909 "task_count": 2048, 00:22:09.909 "sequence_count": 2048, 00:22:09.909 "buf_count": 2048 00:22:09.909 } 00:22:09.909 } 00:22:09.909 ] 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "subsystem": "bdev", 00:22:09.909 "config": [ 00:22:09.909 { 00:22:09.909 "method": "bdev_set_options", 00:22:09.909 "params": { 00:22:09.909 "bdev_io_pool_size": 65535, 00:22:09.909 "bdev_io_cache_size": 256, 00:22:09.909 "bdev_auto_examine": true, 00:22:09.909 "iobuf_small_cache_size": 128, 00:22:09.909 "iobuf_large_cache_size": 16 00:22:09.909 } 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "method": "bdev_raid_set_options", 00:22:09.909 "params": { 00:22:09.909 "process_window_size_kb": 1024, 00:22:09.909 "process_max_bandwidth_mb_sec": 0 00:22:09.909 } 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "method": "bdev_iscsi_set_options", 00:22:09.909 "params": { 00:22:09.909 "timeout_sec": 30 00:22:09.909 } 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "method": "bdev_nvme_set_options", 00:22:09.909 "params": { 00:22:09.909 "action_on_timeout": "none", 00:22:09.909 "timeout_us": 0, 00:22:09.909 "timeout_admin_us": 0, 00:22:09.909 "keep_alive_timeout_ms": 10000, 00:22:09.909 "arbitration_burst": 0, 00:22:09.909 "low_priority_weight": 0, 00:22:09.909 "medium_priority_weight": 0, 00:22:09.909 "high_priority_weight": 0, 00:22:09.909 "nvme_adminq_poll_period_us": 10000, 00:22:09.909 "nvme_ioq_poll_period_us": 0, 00:22:09.909 
"io_queue_requests": 0, 00:22:09.909 "delay_cmd_submit": true, 00:22:09.909 "transport_retry_count": 4, 00:22:09.909 "bdev_retry_count": 3, 00:22:09.909 "transport_ack_timeout": 0, 00:22:09.909 "ctrlr_loss_timeout_sec": 0, 00:22:09.909 "reconnect_delay_sec": 0, 00:22:09.909 "fast_io_fail_timeout_sec": 0, 00:22:09.909 "disable_auto_failback": false, 00:22:09.909 "generate_uuids": false, 00:22:09.909 "transport_tos": 0, 00:22:09.909 "nvme_error_stat": false, 00:22:09.909 "rdma_srq_size": 0, 00:22:09.909 "io_path_stat": false, 00:22:09.909 "allow_accel_sequence": false, 00:22:09.909 "rdma_max_cq_size": 0, 00:22:09.909 "rdma_cm_event_timeout_ms": 0, 00:22:09.909 "dhchap_digests": [ 00:22:09.909 "sha256", 00:22:09.909 "sha384", 00:22:09.909 "sha512" 00:22:09.909 ], 00:22:09.909 "dhchap_dhgroups": [ 00:22:09.909 "null", 00:22:09.909 "ffdhe2048", 00:22:09.909 "ffdhe3072", 00:22:09.909 "ffdhe4096", 00:22:09.909 "ffdhe6144", 00:22:09.909 "ffdhe8192" 00:22:09.909 ] 00:22:09.909 } 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "method": "bdev_nvme_set_hotplug", 00:22:09.909 "params": { 00:22:09.909 "period_us": 100000, 00:22:09.909 "enable": false 00:22:09.909 } 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "method": "bdev_malloc_create", 00:22:09.909 "params": { 00:22:09.909 "name": "malloc0", 00:22:09.909 "num_blocks": 8192, 00:22:09.909 "block_size": 4096, 00:22:09.909 "physical_block_size": 4096, 00:22:09.909 "uuid": "c60557c0-e7f2-4520-9a63-84921ea0d2dd", 00:22:09.909 "optimal_io_boundary": 0, 00:22:09.909 "md_size": 0, 00:22:09.909 "dif_type": 0, 00:22:09.909 "dif_is_head_of_md": false, 00:22:09.909 "dif_pi_format": 0 00:22:09.909 } 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "method": "bdev_wait_for_examine" 00:22:09.909 } 00:22:09.909 ] 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "subsystem": "nbd", 00:22:09.909 "config": [] 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "subsystem": "scheduler", 00:22:09.909 "config": [ 00:22:09.909 { 00:22:09.909 "method": 
"framework_set_scheduler", 00:22:09.909 "params": { 00:22:09.909 "name": "static" 00:22:09.909 } 00:22:09.909 } 00:22:09.909 ] 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "subsystem": "nvmf", 00:22:09.909 "config": [ 00:22:09.909 { 00:22:09.909 "method": "nvmf_set_config", 00:22:09.909 "params": { 00:22:09.909 "discovery_filter": "match_any", 00:22:09.909 "admin_cmd_passthru": { 00:22:09.909 "identify_ctrlr": false 00:22:09.909 }, 00:22:09.909 "dhchap_digests": [ 00:22:09.909 "sha256", 00:22:09.909 "sha384", 00:22:09.909 "sha512" 00:22:09.909 ], 00:22:09.909 "dhchap_dhgroups": [ 00:22:09.909 "null", 00:22:09.909 "ffdhe2048", 00:22:09.909 "ffdhe3072", 00:22:09.909 "ffdhe4096", 00:22:09.909 "ffdhe6144", 00:22:09.909 "ffdhe8192" 00:22:09.909 ] 00:22:09.909 } 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "method": "nvmf_set_max_subsystems", 00:22:09.909 "params": { 00:22:09.909 "max_subsystems": 1024 00:22:09.909 } 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "method": "nvmf_set_crdt", 00:22:09.909 "params": { 00:22:09.909 "crdt1": 0, 00:22:09.909 "crdt2": 0, 00:22:09.909 "crdt3": 0 00:22:09.909 } 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "method": "nvmf_create_transport", 00:22:09.909 "params": { 00:22:09.909 "trtype": "TCP", 00:22:09.909 "max_queue_depth": 128, 00:22:09.909 "max_io_qpairs_per_ctrlr": 127, 00:22:09.909 "in_capsule_data_size": 4096, 00:22:09.909 "max_io_size": 131072, 00:22:09.909 "io_unit_size": 131072, 00:22:09.909 "max_aq_depth": 128, 00:22:09.909 "num_shared_buffers": 511, 00:22:09.909 "buf_cache_size": 4294967295, 00:22:09.909 "dif_insert_or_strip": false, 00:22:09.909 "zcopy": false, 00:22:09.909 "c2h_success": false, 00:22:09.909 "sock_priority": 0, 00:22:09.909 "abort_timeout_sec": 1, 00:22:09.909 "ack_timeout": 0, 00:22:09.909 "data_wr_pool_size": 0 00:22:09.909 } 00:22:09.909 }, 00:22:09.909 { 00:22:09.909 "method": "nvmf_create_subsystem", 00:22:09.909 "params": { 00:22:09.909 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.909 
"allow_any_host": false, 00:22:09.909 "serial_number": "SPDK00000000000001", 00:22:09.910 "model_number": "SPDK bdev Controller", 00:22:09.910 "max_namespaces": 10, 00:22:09.910 "min_cntlid": 1, 00:22:09.910 "max_cntlid": 65519, 00:22:09.910 "ana_reporting": false 00:22:09.910 } 00:22:09.910 }, 00:22:09.910 { 00:22:09.910 "method": "nvmf_subsystem_add_host", 00:22:09.910 "params": { 00:22:09.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.910 "host": "nqn.2016-06.io.spdk:host1", 00:22:09.910 "psk": "key0" 00:22:09.910 } 00:22:09.910 }, 00:22:09.910 { 00:22:09.910 "method": "nvmf_subsystem_add_ns", 00:22:09.910 "params": { 00:22:09.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.910 "namespace": { 00:22:09.910 "nsid": 1, 00:22:09.910 "bdev_name": "malloc0", 00:22:09.910 "nguid": "C60557C0E7F245209A6384921EA0D2DD", 00:22:09.910 "uuid": "c60557c0-e7f2-4520-9a63-84921ea0d2dd", 00:22:09.910 "no_auto_visible": false 00:22:09.910 } 00:22:09.910 } 00:22:09.910 }, 00:22:09.910 { 00:22:09.910 "method": "nvmf_subsystem_add_listener", 00:22:09.910 "params": { 00:22:09.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:09.910 "listen_address": { 00:22:09.910 "trtype": "TCP", 00:22:09.910 "adrfam": "IPv4", 00:22:09.910 "traddr": "10.0.0.2", 00:22:09.910 "trsvcid": "4420" 00:22:09.910 }, 00:22:09.910 "secure_channel": true 00:22:09.910 } 00:22:09.910 } 00:22:09.910 ] 00:22:09.910 } 00:22:09.910 ] 00:22:09.910 }' 00:22:09.910 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:10.192 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:10.192 "subsystems": [ 00:22:10.192 { 00:22:10.192 "subsystem": "keyring", 00:22:10.192 "config": [ 00:22:10.192 { 00:22:10.192 "method": "keyring_file_add_key", 00:22:10.192 "params": { 00:22:10.192 "name": "key0", 00:22:10.192 "path": "/tmp/tmp.p85alnbf81" 00:22:10.192 } 
00:22:10.192 } 00:22:10.192 ] 00:22:10.192 }, 00:22:10.192 { 00:22:10.192 "subsystem": "iobuf", 00:22:10.192 "config": [ 00:22:10.192 { 00:22:10.192 "method": "iobuf_set_options", 00:22:10.192 "params": { 00:22:10.192 "small_pool_count": 8192, 00:22:10.192 "large_pool_count": 1024, 00:22:10.192 "small_bufsize": 8192, 00:22:10.192 "large_bufsize": 135168 00:22:10.192 } 00:22:10.192 } 00:22:10.192 ] 00:22:10.192 }, 00:22:10.192 { 00:22:10.192 "subsystem": "sock", 00:22:10.192 "config": [ 00:22:10.192 { 00:22:10.192 "method": "sock_set_default_impl", 00:22:10.192 "params": { 00:22:10.192 "impl_name": "posix" 00:22:10.192 } 00:22:10.192 }, 00:22:10.192 { 00:22:10.192 "method": "sock_impl_set_options", 00:22:10.192 "params": { 00:22:10.192 "impl_name": "ssl", 00:22:10.192 "recv_buf_size": 4096, 00:22:10.192 "send_buf_size": 4096, 00:22:10.192 "enable_recv_pipe": true, 00:22:10.192 "enable_quickack": false, 00:22:10.192 "enable_placement_id": 0, 00:22:10.192 "enable_zerocopy_send_server": true, 00:22:10.192 "enable_zerocopy_send_client": false, 00:22:10.192 "zerocopy_threshold": 0, 00:22:10.192 "tls_version": 0, 00:22:10.192 "enable_ktls": false 00:22:10.192 } 00:22:10.192 }, 00:22:10.192 { 00:22:10.192 "method": "sock_impl_set_options", 00:22:10.192 "params": { 00:22:10.192 "impl_name": "posix", 00:22:10.192 "recv_buf_size": 2097152, 00:22:10.192 "send_buf_size": 2097152, 00:22:10.192 "enable_recv_pipe": true, 00:22:10.192 "enable_quickack": false, 00:22:10.192 "enable_placement_id": 0, 00:22:10.192 "enable_zerocopy_send_server": true, 00:22:10.192 "enable_zerocopy_send_client": false, 00:22:10.192 "zerocopy_threshold": 0, 00:22:10.192 "tls_version": 0, 00:22:10.192 "enable_ktls": false 00:22:10.192 } 00:22:10.192 } 00:22:10.192 ] 00:22:10.192 }, 00:22:10.192 { 00:22:10.192 "subsystem": "vmd", 00:22:10.192 "config": [] 00:22:10.192 }, 00:22:10.192 { 00:22:10.192 "subsystem": "accel", 00:22:10.192 "config": [ 00:22:10.192 { 00:22:10.192 "method": "accel_set_options", 
00:22:10.192 "params": { 00:22:10.192 "small_cache_size": 128, 00:22:10.192 "large_cache_size": 16, 00:22:10.192 "task_count": 2048, 00:22:10.192 "sequence_count": 2048, 00:22:10.192 "buf_count": 2048 00:22:10.192 } 00:22:10.192 } 00:22:10.192 ] 00:22:10.192 }, 00:22:10.192 { 00:22:10.192 "subsystem": "bdev", 00:22:10.192 "config": [ 00:22:10.192 { 00:22:10.192 "method": "bdev_set_options", 00:22:10.192 "params": { 00:22:10.192 "bdev_io_pool_size": 65535, 00:22:10.192 "bdev_io_cache_size": 256, 00:22:10.192 "bdev_auto_examine": true, 00:22:10.192 "iobuf_small_cache_size": 128, 00:22:10.192 "iobuf_large_cache_size": 16 00:22:10.192 } 00:22:10.192 }, 00:22:10.192 { 00:22:10.192 "method": "bdev_raid_set_options", 00:22:10.192 "params": { 00:22:10.192 "process_window_size_kb": 1024, 00:22:10.192 "process_max_bandwidth_mb_sec": 0 00:22:10.192 } 00:22:10.192 }, 00:22:10.192 { 00:22:10.192 "method": "bdev_iscsi_set_options", 00:22:10.192 "params": { 00:22:10.192 "timeout_sec": 30 00:22:10.192 } 00:22:10.192 }, 00:22:10.192 { 00:22:10.192 "method": "bdev_nvme_set_options", 00:22:10.192 "params": { 00:22:10.192 "action_on_timeout": "none", 00:22:10.192 "timeout_us": 0, 00:22:10.192 "timeout_admin_us": 0, 00:22:10.192 "keep_alive_timeout_ms": 10000, 00:22:10.192 "arbitration_burst": 0, 00:22:10.192 "low_priority_weight": 0, 00:22:10.192 "medium_priority_weight": 0, 00:22:10.192 "high_priority_weight": 0, 00:22:10.192 "nvme_adminq_poll_period_us": 10000, 00:22:10.192 "nvme_ioq_poll_period_us": 0, 00:22:10.192 "io_queue_requests": 512, 00:22:10.192 "delay_cmd_submit": true, 00:22:10.192 "transport_retry_count": 4, 00:22:10.192 "bdev_retry_count": 3, 00:22:10.192 "transport_ack_timeout": 0, 00:22:10.192 "ctrlr_loss_timeout_sec": 0, 00:22:10.192 "reconnect_delay_sec": 0, 00:22:10.192 "fast_io_fail_timeout_sec": 0, 00:22:10.192 "disable_auto_failback": false, 00:22:10.192 "generate_uuids": false, 00:22:10.192 "transport_tos": 0, 00:22:10.192 "nvme_error_stat": false, 00:22:10.192 
"rdma_srq_size": 0, 00:22:10.192 "io_path_stat": false, 00:22:10.192 "allow_accel_sequence": false, 00:22:10.192 "rdma_max_cq_size": 0, 00:22:10.192 "rdma_cm_event_timeout_ms": 0, 00:22:10.192 "dhchap_digests": [ 00:22:10.192 "sha256", 00:22:10.192 "sha384", 00:22:10.192 "sha512" 00:22:10.192 ], 00:22:10.192 "dhchap_dhgroups": [ 00:22:10.192 "null", 00:22:10.192 "ffdhe2048", 00:22:10.192 "ffdhe3072", 00:22:10.192 "ffdhe4096", 00:22:10.192 "ffdhe6144", 00:22:10.192 "ffdhe8192" 00:22:10.192 ] 00:22:10.192 } 00:22:10.192 }, 00:22:10.192 { 00:22:10.192 "method": "bdev_nvme_attach_controller", 00:22:10.192 "params": { 00:22:10.192 "name": "TLSTEST", 00:22:10.192 "trtype": "TCP", 00:22:10.192 "adrfam": "IPv4", 00:22:10.192 "traddr": "10.0.0.2", 00:22:10.192 "trsvcid": "4420", 00:22:10.192 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.192 "prchk_reftag": false, 00:22:10.192 "prchk_guard": false, 00:22:10.192 "ctrlr_loss_timeout_sec": 0, 00:22:10.192 "reconnect_delay_sec": 0, 00:22:10.192 "fast_io_fail_timeout_sec": 0, 00:22:10.192 "psk": "key0", 00:22:10.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:10.192 "hdgst": false, 00:22:10.192 "ddgst": false, 00:22:10.192 "multipath": "multipath" 00:22:10.192 } 00:22:10.192 }, 00:22:10.192 { 00:22:10.192 "method": "bdev_nvme_set_hotplug", 00:22:10.192 "params": { 00:22:10.192 "period_us": 100000, 00:22:10.192 "enable": false 00:22:10.192 } 00:22:10.192 }, 00:22:10.192 { 00:22:10.192 "method": "bdev_wait_for_examine" 00:22:10.192 } 00:22:10.192 ] 00:22:10.192 }, 00:22:10.192 { 00:22:10.192 "subsystem": "nbd", 00:22:10.192 "config": [] 00:22:10.192 } 00:22:10.192 ] 00:22:10.192 }' 00:22:10.192 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1828401 00:22:10.192 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1828401 ']' 00:22:10.192 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1828401 00:22:10.192 
13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:10.192 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:10.192 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1828401 00:22:10.192 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:10.192 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:10.192 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1828401' 00:22:10.192 killing process with pid 1828401 00:22:10.192 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1828401 00:22:10.192 Received shutdown signal, test time was about 10.000000 seconds 00:22:10.192 00:22:10.192 Latency(us) 00:22:10.192 [2024-10-07T11:32:51.904Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.192 [2024-10-07T11:32:51.904Z] =================================================================================================================== 00:22:10.192 [2024-10-07T11:32:51.905Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:10.193 13:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1828401 00:22:10.457 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1828127 00:22:10.457 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1828127 ']' 00:22:10.457 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1828127 00:22:10.457 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:10.457 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux 
= Linux ']' 00:22:10.457 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1828127 00:22:10.457 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:10.457 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:10.457 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1828127' 00:22:10.457 killing process with pid 1828127 00:22:10.457 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1828127 00:22:10.457 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1828127 00:22:10.716 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:10.716 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:10.716 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:10.716 "subsystems": [ 00:22:10.716 { 00:22:10.716 "subsystem": "keyring", 00:22:10.716 "config": [ 00:22:10.716 { 00:22:10.716 "method": "keyring_file_add_key", 00:22:10.716 "params": { 00:22:10.716 "name": "key0", 00:22:10.716 "path": "/tmp/tmp.p85alnbf81" 00:22:10.716 } 00:22:10.716 } 00:22:10.716 ] 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "subsystem": "iobuf", 00:22:10.716 "config": [ 00:22:10.716 { 00:22:10.716 "method": "iobuf_set_options", 00:22:10.716 "params": { 00:22:10.716 "small_pool_count": 8192, 00:22:10.716 "large_pool_count": 1024, 00:22:10.716 "small_bufsize": 8192, 00:22:10.716 "large_bufsize": 135168 00:22:10.716 } 00:22:10.716 } 00:22:10.716 ] 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "subsystem": "sock", 00:22:10.716 "config": [ 00:22:10.716 { 00:22:10.716 "method": "sock_set_default_impl", 00:22:10.716 "params": { 00:22:10.716 "impl_name": "posix" 
00:22:10.716 } 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "method": "sock_impl_set_options", 00:22:10.716 "params": { 00:22:10.716 "impl_name": "ssl", 00:22:10.716 "recv_buf_size": 4096, 00:22:10.716 "send_buf_size": 4096, 00:22:10.716 "enable_recv_pipe": true, 00:22:10.716 "enable_quickack": false, 00:22:10.716 "enable_placement_id": 0, 00:22:10.716 "enable_zerocopy_send_server": true, 00:22:10.716 "enable_zerocopy_send_client": false, 00:22:10.716 "zerocopy_threshold": 0, 00:22:10.716 "tls_version": 0, 00:22:10.716 "enable_ktls": false 00:22:10.716 } 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "method": "sock_impl_set_options", 00:22:10.716 "params": { 00:22:10.716 "impl_name": "posix", 00:22:10.716 "recv_buf_size": 2097152, 00:22:10.716 "send_buf_size": 2097152, 00:22:10.716 "enable_recv_pipe": true, 00:22:10.716 "enable_quickack": false, 00:22:10.716 "enable_placement_id": 0, 00:22:10.716 "enable_zerocopy_send_server": true, 00:22:10.716 "enable_zerocopy_send_client": false, 00:22:10.716 "zerocopy_threshold": 0, 00:22:10.716 "tls_version": 0, 00:22:10.716 "enable_ktls": false 00:22:10.716 } 00:22:10.716 } 00:22:10.716 ] 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "subsystem": "vmd", 00:22:10.716 "config": [] 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "subsystem": "accel", 00:22:10.716 "config": [ 00:22:10.716 { 00:22:10.716 "method": "accel_set_options", 00:22:10.716 "params": { 00:22:10.716 "small_cache_size": 128, 00:22:10.716 "large_cache_size": 16, 00:22:10.716 "task_count": 2048, 00:22:10.716 "sequence_count": 2048, 00:22:10.716 "buf_count": 2048 00:22:10.716 } 00:22:10.716 } 00:22:10.716 ] 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "subsystem": "bdev", 00:22:10.716 "config": [ 00:22:10.716 { 00:22:10.716 "method": "bdev_set_options", 00:22:10.716 "params": { 00:22:10.716 "bdev_io_pool_size": 65535, 00:22:10.716 "bdev_io_cache_size": 256, 00:22:10.716 "bdev_auto_examine": true, 00:22:10.716 "iobuf_small_cache_size": 128, 00:22:10.716 
"iobuf_large_cache_size": 16 00:22:10.716 } 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "method": "bdev_raid_set_options", 00:22:10.716 "params": { 00:22:10.716 "process_window_size_kb": 1024, 00:22:10.716 "process_max_bandwidth_mb_sec": 0 00:22:10.716 } 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "method": "bdev_iscsi_set_options", 00:22:10.716 "params": { 00:22:10.716 "timeout_sec": 30 00:22:10.716 } 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "method": "bdev_nvme_set_options", 00:22:10.716 "params": { 00:22:10.716 "action_on_timeout": "none", 00:22:10.716 "timeout_us": 0, 00:22:10.716 "timeout_admin_us": 0, 00:22:10.716 "keep_alive_timeout_ms": 10000, 00:22:10.716 "arbitration_burst": 0, 00:22:10.716 "low_priority_weight": 0, 00:22:10.716 "medium_priority_weight": 0, 00:22:10.716 "high_priority_weight": 0, 00:22:10.716 "nvme_adminq_poll_period_us": 10000, 00:22:10.716 "nvme_ioq_poll_period_us": 0, 00:22:10.716 "io_queue_requests": 0, 00:22:10.716 "delay_cmd_submit": true, 00:22:10.716 "transport_retry_count": 4, 00:22:10.716 "bdev_retry_count": 3, 00:22:10.716 "transport_ack_timeout": 0, 00:22:10.716 "ctrlr_loss_timeout_sec": 0, 00:22:10.716 "reconnect_delay_sec": 0, 00:22:10.716 "fast_io_fail_timeout_sec": 0, 00:22:10.716 "disable_auto_failback": false, 00:22:10.716 "generate_uuids": false, 00:22:10.716 "transport_tos": 0, 00:22:10.716 "nvme_error_stat": false, 00:22:10.716 "rdma_srq_size": 0, 00:22:10.716 "io_path_stat": false, 00:22:10.716 "allow_accel_sequence": false, 00:22:10.716 "rdma_max_cq_size": 0, 00:22:10.716 "rdma_cm_event_timeout_ms": 0, 00:22:10.716 "dhchap_digests": [ 00:22:10.716 "sha256", 00:22:10.716 "sha384", 00:22:10.716 "sha512" 00:22:10.716 ], 00:22:10.716 "dhchap_dhgroups": [ 00:22:10.716 "null", 00:22:10.716 "ffdhe2048", 00:22:10.716 "ffdhe3072", 00:22:10.716 "ffdhe4096", 00:22:10.716 "ffdhe6144", 00:22:10.716 "ffdhe8192" 00:22:10.716 ] 00:22:10.716 } 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "method": "bdev_nvme_set_hotplug", 
00:22:10.716 "params": { 00:22:10.716 "period_us": 100000, 00:22:10.716 "enable": false 00:22:10.716 } 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "method": "bdev_malloc_create", 00:22:10.716 "params": { 00:22:10.716 "name": "malloc0", 00:22:10.716 "num_blocks": 8192, 00:22:10.716 "block_size": 4096, 00:22:10.716 "physical_block_size": 4096, 00:22:10.716 "uuid": "c60557c0-e7f2-4520-9a63-84921ea0d2dd", 00:22:10.716 "optimal_io_boundary": 0, 00:22:10.716 "md_size": 0, 00:22:10.716 "dif_type": 0, 00:22:10.716 "dif_is_head_of_md": false, 00:22:10.716 "dif_pi_format": 0 00:22:10.716 } 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "method": "bdev_wait_for_examine" 00:22:10.716 } 00:22:10.716 ] 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "subsystem": "nbd", 00:22:10.716 "config": [] 00:22:10.716 }, 00:22:10.716 { 00:22:10.716 "subsystem": "scheduler", 00:22:10.716 "config": [ 00:22:10.716 { 00:22:10.716 "method": "framework_set_scheduler", 00:22:10.717 "params": { 00:22:10.717 "name": "static" 00:22:10.717 } 00:22:10.717 } 00:22:10.717 ] 00:22:10.717 }, 00:22:10.717 { 00:22:10.717 "subsystem": "nvmf", 00:22:10.717 "config": [ 00:22:10.717 { 00:22:10.717 "method": "nvmf_set_config", 00:22:10.717 "params": { 00:22:10.717 "discovery_filter": "match_any", 00:22:10.717 "admin_cmd_passthru": { 00:22:10.717 "identify_ctrlr": false 00:22:10.717 }, 00:22:10.717 "dhchap_digests": [ 00:22:10.717 "sha256", 00:22:10.717 "sha384", 00:22:10.717 "sha512" 00:22:10.717 ], 00:22:10.717 "dhchap_dhgroups": [ 00:22:10.717 "null", 00:22:10.717 "ffdhe2048", 00:22:10.717 "ffdhe3072", 00:22:10.717 "ffdhe4096", 00:22:10.717 "ffdhe6144", 00:22:10.717 "ffdhe8192" 00:22:10.717 ] 00:22:10.717 } 00:22:10.717 }, 00:22:10.717 { 00:22:10.717 "method": "nvmf_set_max_subsystems", 00:22:10.717 "params": { 00:22:10.717 "max_subsystems": 1024 00:22:10.717 } 00:22:10.717 }, 00:22:10.717 { 00:22:10.717 "method": "nvmf_set_crdt", 00:22:10.717 "params": { 00:22:10.717 "crdt1": 0, 00:22:10.717 "crdt2": 0, 00:22:10.717 
"crdt3": 0 00:22:10.717 } 00:22:10.717 }, 00:22:10.717 { 00:22:10.717 "method": "nvmf_create_transport", 00:22:10.717 "params": { 00:22:10.717 "trtype": "TCP", 00:22:10.717 "max_queue_depth": 128, 00:22:10.717 "max_io_qpairs_per_ctrlr": 127, 00:22:10.717 "in_capsule_data_size": 4096, 00:22:10.717 "max_io_size": 131072, 00:22:10.717 "io_unit_size": 131072, 00:22:10.717 "max_aq_depth": 128, 00:22:10.717 "num_shared_buffers": 511, 00:22:10.717 "buf_cache_size": 4294967295, 00:22:10.717 "dif_insert_or_strip": false, 00:22:10.717 "zcopy": false, 00:22:10.717 "c2h_success": false, 00:22:10.717 "sock_priority": 0, 00:22:10.717 "abort_timeout_sec": 1, 00:22:10.717 "ack_timeout": 0, 00:22:10.717 "data_wr_pool_size": 0 00:22:10.717 } 00:22:10.717 }, 00:22:10.717 { 00:22:10.717 "method": "nvmf_create_subsystem", 00:22:10.717 "params": { 00:22:10.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.717 "allow_any_host": false, 00:22:10.717 "serial_number": "SPDK00000000000001", 00:22:10.717 "model_number": "SPDK bdev Controller", 00:22:10.717 "max_namespaces": 10, 00:22:10.717 "min_cntlid": 1, 00:22:10.717 "max_cntlid": 65519, 00:22:10.717 "ana_reporting": false 00:22:10.717 } 00:22:10.717 }, 00:22:10.717 { 00:22:10.717 "method": "nvmf_subsystem_add_host", 00:22:10.717 "params": { 00:22:10.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.717 "host": "nqn.2016-06.io.spdk:host1", 00:22:10.717 "psk": "key0" 00:22:10.717 } 00:22:10.717 }, 00:22:10.717 { 00:22:10.717 "method": "nvmf_subsystem_add_ns", 00:22:10.717 "params": { 00:22:10.717 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.717 "namespace": { 00:22:10.717 "nsid": 1, 00:22:10.717 "bdev_name": "malloc0", 00:22:10.717 "nguid": "C60557C0E7F245209A6384921EA0D2DD", 00:22:10.717 "uuid": "c60557c0-e7f2-4520-9a63-84921ea0d2dd", 00:22:10.717 "no_auto_visible": false 00:22:10.717 } 00:22:10.717 } 00:22:10.717 }, 00:22:10.717 { 00:22:10.717 "method": "nvmf_subsystem_add_listener", 00:22:10.717 "params": { 00:22:10.717 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:22:10.717 "listen_address": { 00:22:10.717 "trtype": "TCP", 00:22:10.717 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:10.717 "adrfam": "IPv4", 00:22:10.717 "traddr": "10.0.0.2", 00:22:10.717 "trsvcid": "4420" 00:22:10.717 }, 00:22:10.717 "secure_channel": true 00:22:10.717 } 00:22:10.717 } 00:22:10.717 ] 00:22:10.717 } 00:22:10.717 ] 00:22:10.717 }' 00:22:10.717 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.717 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1828678 00:22:10.717 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:10.717 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1828678 00:22:10.717 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1828678 ']' 00:22:10.717 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.717 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:10.717 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.717 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:10.717 13:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.975 [2024-10-07 13:32:52.466419] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:22:10.975 [2024-10-07 13:32:52.466508] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.975 [2024-10-07 13:32:52.527323] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.975 [2024-10-07 13:32:52.630960] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.975 [2024-10-07 13:32:52.631014] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.975 [2024-10-07 13:32:52.631038] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.975 [2024-10-07 13:32:52.631049] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.976 [2024-10-07 13:32:52.631058] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:10.976 [2024-10-07 13:32:52.631607] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.233 [2024-10-07 13:32:52.880847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.233 [2024-10-07 13:32:52.912841] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:11.233 [2024-10-07 13:32:52.913107] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.799 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:11.799 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:11.799 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:11.799 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:11.799 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.799 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.799 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1828822 00:22:11.799 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1828822 /var/tmp/bdevperf.sock 00:22:11.799 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1828822 ']' 00:22:11.799 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.799 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:11.799 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:11.799 
"subsystems": [ 00:22:11.799 { 00:22:11.799 "subsystem": "keyring", 00:22:11.799 "config": [ 00:22:11.799 { 00:22:11.799 "method": "keyring_file_add_key", 00:22:11.799 "params": { 00:22:11.799 "name": "key0", 00:22:11.799 "path": "/tmp/tmp.p85alnbf81" 00:22:11.799 } 00:22:11.799 } 00:22:11.799 ] 00:22:11.799 }, 00:22:11.799 { 00:22:11.799 "subsystem": "iobuf", 00:22:11.799 "config": [ 00:22:11.799 { 00:22:11.799 "method": "iobuf_set_options", 00:22:11.799 "params": { 00:22:11.799 "small_pool_count": 8192, 00:22:11.799 "large_pool_count": 1024, 00:22:11.799 "small_bufsize": 8192, 00:22:11.799 "large_bufsize": 135168 00:22:11.799 } 00:22:11.799 } 00:22:11.799 ] 00:22:11.799 }, 00:22:11.799 { 00:22:11.799 "subsystem": "sock", 00:22:11.799 "config": [ 00:22:11.799 { 00:22:11.799 "method": "sock_set_default_impl", 00:22:11.799 "params": { 00:22:11.799 "impl_name": "posix" 00:22:11.799 } 00:22:11.799 }, 00:22:11.799 { 00:22:11.799 "method": "sock_impl_set_options", 00:22:11.799 "params": { 00:22:11.799 "impl_name": "ssl", 00:22:11.799 "recv_buf_size": 4096, 00:22:11.799 "send_buf_size": 4096, 00:22:11.799 "enable_recv_pipe": true, 00:22:11.799 "enable_quickack": false, 00:22:11.799 "enable_placement_id": 0, 00:22:11.799 "enable_zerocopy_send_server": true, 00:22:11.799 "enable_zerocopy_send_client": false, 00:22:11.799 "zerocopy_threshold": 0, 00:22:11.799 "tls_version": 0, 00:22:11.799 "enable_ktls": false 00:22:11.799 } 00:22:11.799 }, 00:22:11.799 { 00:22:11.799 "method": "sock_impl_set_options", 00:22:11.799 "params": { 00:22:11.799 "impl_name": "posix", 00:22:11.799 "recv_buf_size": 2097152, 00:22:11.799 "send_buf_size": 2097152, 00:22:11.799 "enable_recv_pipe": true, 00:22:11.799 "enable_quickack": false, 00:22:11.799 "enable_placement_id": 0, 00:22:11.799 "enable_zerocopy_send_server": true, 00:22:11.799 "enable_zerocopy_send_client": false, 00:22:11.799 "zerocopy_threshold": 0, 00:22:11.799 "tls_version": 0, 00:22:11.799 "enable_ktls": false 00:22:11.799 } 
00:22:11.799 } 00:22:11.799 ] 00:22:11.799 }, 00:22:11.799 { 00:22:11.799 "subsystem": "vmd", 00:22:11.799 "config": [] 00:22:11.799 }, 00:22:11.799 { 00:22:11.799 "subsystem": "accel", 00:22:11.799 "config": [ 00:22:11.799 { 00:22:11.799 "method": "accel_set_options", 00:22:11.799 "params": { 00:22:11.799 "small_cache_size": 128, 00:22:11.799 "large_cache_size": 16, 00:22:11.799 "task_count": 2048, 00:22:11.799 "sequence_count": 2048, 00:22:11.799 "buf_count": 2048 00:22:11.799 } 00:22:11.799 } 00:22:11.799 ] 00:22:11.799 }, 00:22:11.799 { 00:22:11.799 "subsystem": "bdev", 00:22:11.799 "config": [ 00:22:11.799 { 00:22:11.799 "method": "bdev_set_options", 00:22:11.799 "params": { 00:22:11.799 "bdev_io_pool_size": 65535, 00:22:11.799 "bdev_io_cache_size": 256, 00:22:11.799 "bdev_auto_examine": true, 00:22:11.799 "iobuf_small_cache_size": 128, 00:22:11.799 "iobuf_large_cache_size": 16 00:22:11.799 } 00:22:11.799 }, 00:22:11.799 { 00:22:11.799 "method": "bdev_raid_set_options", 00:22:11.799 "params": { 00:22:11.799 "process_window_size_kb": 1024, 00:22:11.799 "process_max_bandwidth_mb_sec": 0 00:22:11.799 } 00:22:11.799 }, 00:22:11.799 { 00:22:11.799 "method": "bdev_iscsi_set_options", 00:22:11.799 "params": { 00:22:11.799 "timeout_sec": 30 00:22:11.799 } 00:22:11.799 }, 00:22:11.799 { 00:22:11.799 "method": "bdev_nvme_set_options", 00:22:11.799 "params": { 00:22:11.799 "action_on_timeout": "none", 00:22:11.799 "timeout_us": 0, 00:22:11.799 "timeout_admin_us": 0, 00:22:11.799 "keep_alive_timeout_ms": 10000, 00:22:11.799 "arbitration_burst": 0, 00:22:11.799 "low_priority_weight": 0, 00:22:11.799 "medium_priority_weight": 0, 00:22:11.799 "high_priority_weight": 0, 00:22:11.799 "nvme_adminq_poll_period_us": 10000, 00:22:11.799 "nvme_ioq_poll_period_us": 0, 00:22:11.799 "io_queue_requests": 512, 00:22:11.799 "delay_cmd_submit": true, 00:22:11.799 "transport_retry_count": 4, 00:22:11.799 "bdev_retry_count": 3, 00:22:11.799 "transport_ack_timeout": 0, 00:22:11.799 
"ctrlr_loss_timeout_sec": 0, 00:22:11.799 "reconnect_delay_sec": 0, 00:22:11.799 "fast_io_fail_timeout_sec": 0, 00:22:11.799 "disable_auto_failback": false, 00:22:11.799 "generate_uuids": false, 00:22:11.799 "transport_tos": 0, 00:22:11.799 "nvme_error_stat": false, 00:22:11.799 "rdma_srq_size": 0, 00:22:11.799 "io_path_stat": false, 00:22:11.799 "allow_accel_sequence": false, 00:22:11.799 "rdma_max_cq_size": 0, 00:22:11.799 "rdma_cm_event_timeout_ms": 0, 00:22:11.799 "dhchap_digests": [ 00:22:11.799 "sha256", 00:22:11.799 "sha384", 00:22:11.799 "sha512" 00:22:11.799 ], 00:22:11.799 "dhchap_dhgroups": [ 00:22:11.799 "null", 00:22:11.799 "ffdhe2048", 00:22:11.799 "ffdhe3072", 00:22:11.800 "ffdhe4096", 00:22:11.800 "ffdhe6144", 00:22:11.800 "ffdhe8192" 00:22:11.800 ] 00:22:11.800 } 00:22:11.800 }, 00:22:11.800 { 00:22:11.800 "method": "bdev_nvme_attach_controller", 00:22:11.800 "params": { 00:22:11.800 "name": "TLSTEST", 00:22:11.800 "trtype": "TCP", 00:22:11.800 "adrfam": "IPv4", 00:22:11.800 "traddr": "10.0.0.2", 00:22:11.800 "trsvcid": "4420", 00:22:11.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:11.800 "prchk_reftag": false, 00:22:11.800 "prchk_guard": false, 00:22:11.800 "ctrlr_loss_timeout_sec": 0, 00:22:11.800 "reconnect_delay_sec": 0, 00:22:11.800 "fast_io_fail_timeout_sec": 0, 00:22:11.800 "psk": "key0", 00:22:11.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:11.800 "hdgst": false, 00:22:11.800 "ddgst": false, 00:22:11.800 "multipath": "multipath" 00:22:11.800 } 00:22:11.800 }, 00:22:11.800 { 00:22:11.800 "method": "bdev_nvme_set_hotplug", 00:22:11.800 "params": { 00:22:11.800 "period_us": 100000, 00:22:11.800 "enable": false 00:22:11.800 } 00:22:11.800 }, 00:22:11.800 { 00:22:11.800 "method": "bdev_wait_for_examine" 00:22:11.800 } 00:22:11.800 ] 00:22:11.800 }, 00:22:11.800 { 00:22:11.800 "subsystem": "nbd", 00:22:11.800 "config": [] 00:22:11.800 } 00:22:11.800 ] 00:22:11.800 }' 00:22:11.800 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:22:11.800 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:11.800 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:11.800 13:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:12.057 [2024-10-07 13:32:53.547520] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:22:12.057 [2024-10-07 13:32:53.547606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828822 ] 00:22:12.057 [2024-10-07 13:32:53.602397] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.057 [2024-10-07 13:32:53.707854] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.315 [2024-10-07 13:32:53.888469] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:12.879 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:12.879 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:12.879 13:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:13.136 Running I/O for 10 seconds... 
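The `perform_tests` RPC driven by `bdevperf.py -t 20 -s /var/tmp/bdevperf.sock` above prints a per-job results blob like the one that follows. A quick way to pull the headline number out of such a blob is a one-line JSON filter; the literal here is a trimmed stand-in keeping only the fields used, not the full schema:

```shell
#!/usr/bin/env bash
# Trimmed stand-in for a bdevperf "perform_tests" results blob; only the
# fields read below are kept, so this is a sketch rather than the full schema.
results='{"results":[{"job":"TLSTESTn1","iops":3421.742227687803,"mibps":13.36618057690548,"io_failed":0}],"core_count":1}'

# Extract and round the headline IOPS figure with a one-line JSON filter.
iops=$(printf '%s' "$results" | python3 -c \
    'import json,sys; print(round(json.load(sys.stdin)["results"][0]["iops"], 2))')
echo "IOPS: $iops"
```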
00:22:14.999 3348.00 IOPS, 13.08 MiB/s [2024-10-07T11:32:58.082Z] 3439.00 IOPS, 13.43 MiB/s [2024-10-07T11:32:59.012Z] 3449.00 IOPS, 13.47 MiB/s [2024-10-07T11:32:59.943Z] 3434.00 IOPS, 13.41 MiB/s [2024-10-07T11:33:00.876Z] 3413.40 IOPS, 13.33 MiB/s [2024-10-07T11:33:01.808Z] 3408.50 IOPS, 13.31 MiB/s [2024-10-07T11:33:02.739Z] 3403.86 IOPS, 13.30 MiB/s [2024-10-07T11:33:04.110Z] 3411.00 IOPS, 13.32 MiB/s [2024-10-07T11:33:05.043Z] 3416.78 IOPS, 13.35 MiB/s [2024-10-07T11:33:05.043Z] 3416.40 IOPS, 13.35 MiB/s 00:22:23.331 Latency(us) 00:22:23.331 [2024-10-07T11:33:05.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.331 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:23.331 Verification LBA range: start 0x0 length 0x2000 00:22:23.331 TLSTESTn1 : 10.02 3421.74 13.37 0.00 0.00 37343.15 8980.86 35535.08 00:22:23.331 [2024-10-07T11:33:05.043Z] =================================================================================================================== 00:22:23.331 [2024-10-07T11:33:05.043Z] Total : 3421.74 13.37 0.00 0.00 37343.15 8980.86 35535.08 00:22:23.331 { 00:22:23.331 "results": [ 00:22:23.331 { 00:22:23.331 "job": "TLSTESTn1", 00:22:23.331 "core_mask": "0x4", 00:22:23.331 "workload": "verify", 00:22:23.331 "status": "finished", 00:22:23.331 "verify_range": { 00:22:23.331 "start": 0, 00:22:23.331 "length": 8192 00:22:23.331 }, 00:22:23.331 "queue_depth": 128, 00:22:23.331 "io_size": 4096, 00:22:23.331 "runtime": 10.021503, 00:22:23.331 "iops": 3421.742227687803, 00:22:23.331 "mibps": 13.36618057690548, 00:22:23.331 "io_failed": 0, 00:22:23.331 "io_timeout": 0, 00:22:23.331 "avg_latency_us": 37343.153301298145, 00:22:23.331 "min_latency_us": 8980.85925925926, 00:22:23.331 "max_latency_us": 35535.07555555556 00:22:23.331 } 00:22:23.331 ], 00:22:23.331 "core_count": 1 00:22:23.331 } 00:22:23.331 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:22:23.331 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1828822 00:22:23.331 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1828822 ']' 00:22:23.331 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1828822 00:22:23.331 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:23.331 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:23.331 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1828822 00:22:23.331 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:23.331 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:23.331 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1828822' 00:22:23.331 killing process with pid 1828822 00:22:23.331 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1828822 00:22:23.331 Received shutdown signal, test time was about 10.000000 seconds 00:22:23.331 00:22:23.331 Latency(us) 00:22:23.331 [2024-10-07T11:33:05.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.331 [2024-10-07T11:33:05.043Z] =================================================================================================================== 00:22:23.331 [2024-10-07T11:33:05.043Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.331 13:33:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1828822 00:22:23.598 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1828678 00:22:23.598 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@950 -- # '[' -z 1828678 ']' 00:22:23.598 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1828678 00:22:23.598 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:23.598 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:23.598 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1828678 00:22:23.598 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:23.598 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:23.598 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1828678' 00:22:23.598 killing process with pid 1828678 00:22:23.598 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1828678 00:22:23.598 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1828678 00:22:23.856 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:22:23.856 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:23.856 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:23.856 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.856 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:23.856 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1830202 00:22:23.856 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1830202 00:22:23.856 
13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1830202 ']' 00:22:23.856 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.856 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:23.856 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.856 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:23.856 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.856 [2024-10-07 13:33:05.422097] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:22:23.856 [2024-10-07 13:33:05.422191] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.856 [2024-10-07 13:33:05.484583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.113 [2024-10-07 13:33:05.596827] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:24.114 [2024-10-07 13:33:05.596886] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:24.114 [2024-10-07 13:33:05.596912] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:24.114 [2024-10-07 13:33:05.596923] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:24.114 [2024-10-07 13:33:05.596934] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:24.114 [2024-10-07 13:33:05.597499] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.114 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:24.114 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:24.114 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:24.114 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:24.114 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:24.114 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.114 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.p85alnbf81 00:22:24.114 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.p85alnbf81 00:22:24.114 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:24.371 [2024-10-07 13:33:05.976118] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.371 13:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:24.628 13:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:24.885 [2024-10-07 13:33:06.521559] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:22:24.885 [2024-10-07 13:33:06.521851] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.885 13:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:25.143 malloc0 00:22:25.143 13:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:25.400 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.p85alnbf81 00:22:25.657 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:26.223 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1830639 00:22:26.223 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:26.223 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.223 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1830639 /var/tmp/bdevperf.sock 00:22:26.223 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1830639 ']' 00:22:26.223 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.223 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:26.223 
13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.223 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:26.223 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.223 [2024-10-07 13:33:07.707256] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:22:26.223 [2024-10-07 13:33:07.707347] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1830639 ] 00:22:26.223 [2024-10-07 13:33:07.768760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:26.223 [2024-10-07 13:33:07.880352] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.481 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:26.481 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:26.481 13:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p85alnbf81 00:22:26.739 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:26.996 [2024-10-07 13:33:08.522949] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:22:26.996 nvme0n1 00:22:26.996 13:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:27.254 Running I/O for 1 seconds... 00:22:28.199 3248.00 IOPS, 12.69 MiB/s 00:22:28.199 Latency(us) 00:22:28.199 [2024-10-07T11:33:09.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.199 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:28.199 Verification LBA range: start 0x0 length 0x2000 00:22:28.199 nvme0n1 : 1.02 3323.42 12.98 0.00 0.00 38241.45 5849.69 37671.06 00:22:28.199 [2024-10-07T11:33:09.911Z] =================================================================================================================== 00:22:28.199 [2024-10-07T11:33:09.911Z] Total : 3323.42 12.98 0.00 0.00 38241.45 5849.69 37671.06 00:22:28.199 { 00:22:28.199 "results": [ 00:22:28.199 { 00:22:28.199 "job": "nvme0n1", 00:22:28.199 "core_mask": "0x2", 00:22:28.199 "workload": "verify", 00:22:28.199 "status": "finished", 00:22:28.199 "verify_range": { 00:22:28.199 "start": 0, 00:22:28.199 "length": 8192 00:22:28.199 }, 00:22:28.199 "queue_depth": 128, 00:22:28.199 "io_size": 4096, 00:22:28.199 "runtime": 1.015822, 00:22:28.199 "iops": 3323.4168978423386, 00:22:28.199 "mibps": 12.982097257196635, 00:22:28.199 "io_failed": 0, 00:22:28.199 "io_timeout": 0, 00:22:28.199 "avg_latency_us": 38241.44806038266, 00:22:28.199 "min_latency_us": 5849.694814814815, 00:22:28.199 "max_latency_us": 37671.0637037037 00:22:28.199 } 00:22:28.199 ], 00:22:28.199 "core_count": 1 00:22:28.199 } 00:22:28.199 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1830639 00:22:28.199 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1830639 ']' 00:22:28.199 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # kill -0 1830639 00:22:28.199 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:28.199 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:28.199 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1830639 00:22:28.199 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:28.199 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:28.199 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1830639' 00:22:28.199 killing process with pid 1830639 00:22:28.199 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1830639 00:22:28.199 Received shutdown signal, test time was about 1.000000 seconds 00:22:28.199 00:22:28.199 Latency(us) 00:22:28.199 [2024-10-07T11:33:09.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.199 [2024-10-07T11:33:09.911Z] =================================================================================================================== 00:22:28.199 [2024-10-07T11:33:09.911Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:28.199 13:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1830639 00:22:28.456 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1830202 00:22:28.456 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1830202 ']' 00:22:28.456 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1830202 00:22:28.456 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:28.456 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:28.456 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1830202 00:22:28.456 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:28.456 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:28.456 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1830202' 00:22:28.456 killing process with pid 1830202 00:22:28.456 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1830202 00:22:28.456 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1830202 00:22:28.716 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:22:28.716 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:28.716 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:28.716 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.716 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1831363 00:22:28.716 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:28.716 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1831363 00:22:28.716 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1831363 ']' 00:22:28.716 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.716 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:22:28.716 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.716 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:28.716 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.975 [2024-10-07 13:33:10.450848] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:22:28.975 [2024-10-07 13:33:10.450950] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.975 [2024-10-07 13:33:10.516760] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.975 [2024-10-07 13:33:10.619457] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.975 [2024-10-07 13:33:10.619519] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.975 [2024-10-07 13:33:10.619542] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.975 [2024-10-07 13:33:10.619553] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.975 [2024-10-07 13:33:10.619562] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:28.975 [2024-10-07 13:33:10.620118] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.233 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:29.233 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:29.233 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:29.233 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:29.233 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.233 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:29.233 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:22:29.233 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.233 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.233 [2024-10-07 13:33:10.750870] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:29.233 malloc0 00:22:29.234 [2024-10-07 13:33:10.798037] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:29.234 [2024-10-07 13:33:10.798295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:29.234 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.234 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1831394 00:22:29.234 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:29.234 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 1831394 /var/tmp/bdevperf.sock 00:22:29.234 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1831394 ']' 00:22:29.234 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:29.234 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:29.234 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:29.234 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:29.234 13:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.234 [2024-10-07 13:33:10.869156] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:22:29.234 [2024-10-07 13:33:10.869218] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831394 ] 00:22:29.234 [2024-10-07 13:33:10.924532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.491 [2024-10-07 13:33:11.031955] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.491 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:29.491 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:29.491 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.p85alnbf81 00:22:29.749 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:30.007 [2024-10-07 13:33:11.639157] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:30.007 nvme0n1 00:22:30.264 13:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:30.264 Running I/O for 1 seconds... 
00:22:31.197 3337.00 IOPS, 13.04 MiB/s 00:22:31.197 Latency(us) 00:22:31.197 [2024-10-07T11:33:12.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.197 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:31.197 Verification LBA range: start 0x0 length 0x2000 00:22:31.197 nvme0n1 : 1.02 3388.57 13.24 0.00 0.00 37409.92 7815.77 28932.93 00:22:31.197 [2024-10-07T11:33:12.909Z] =================================================================================================================== 00:22:31.197 [2024-10-07T11:33:12.909Z] Total : 3388.57 13.24 0.00 0.00 37409.92 7815.77 28932.93 00:22:31.197 { 00:22:31.197 "results": [ 00:22:31.197 { 00:22:31.197 "job": "nvme0n1", 00:22:31.197 "core_mask": "0x2", 00:22:31.197 "workload": "verify", 00:22:31.197 "status": "finished", 00:22:31.197 "verify_range": { 00:22:31.197 "start": 0, 00:22:31.197 "length": 8192 00:22:31.197 }, 00:22:31.197 "queue_depth": 128, 00:22:31.197 "io_size": 4096, 00:22:31.197 "runtime": 1.022554, 00:22:31.197 "iops": 3388.574099754145, 00:22:31.197 "mibps": 13.23661757716463, 00:22:31.197 "io_failed": 0, 00:22:31.197 "io_timeout": 0, 00:22:31.197 "avg_latency_us": 37409.921429747206, 00:22:31.197 "min_latency_us": 7815.774814814815, 00:22:31.197 "max_latency_us": 28932.93037037037 00:22:31.197 } 00:22:31.197 ], 00:22:31.197 "core_count": 1 00:22:31.197 } 00:22:31.197 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:22:31.197 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.197 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.455 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.455 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:22:31.455 "subsystems": [ 00:22:31.455 { 00:22:31.455 "subsystem": 
"keyring", 00:22:31.455 "config": [ 00:22:31.455 { 00:22:31.455 "method": "keyring_file_add_key", 00:22:31.455 "params": { 00:22:31.455 "name": "key0", 00:22:31.455 "path": "/tmp/tmp.p85alnbf81" 00:22:31.455 } 00:22:31.455 } 00:22:31.455 ] 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "subsystem": "iobuf", 00:22:31.455 "config": [ 00:22:31.455 { 00:22:31.455 "method": "iobuf_set_options", 00:22:31.455 "params": { 00:22:31.455 "small_pool_count": 8192, 00:22:31.455 "large_pool_count": 1024, 00:22:31.455 "small_bufsize": 8192, 00:22:31.455 "large_bufsize": 135168 00:22:31.455 } 00:22:31.455 } 00:22:31.455 ] 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "subsystem": "sock", 00:22:31.455 "config": [ 00:22:31.455 { 00:22:31.455 "method": "sock_set_default_impl", 00:22:31.455 "params": { 00:22:31.455 "impl_name": "posix" 00:22:31.455 } 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "method": "sock_impl_set_options", 00:22:31.455 "params": { 00:22:31.455 "impl_name": "ssl", 00:22:31.455 "recv_buf_size": 4096, 00:22:31.455 "send_buf_size": 4096, 00:22:31.455 "enable_recv_pipe": true, 00:22:31.455 "enable_quickack": false, 00:22:31.455 "enable_placement_id": 0, 00:22:31.455 "enable_zerocopy_send_server": true, 00:22:31.455 "enable_zerocopy_send_client": false, 00:22:31.455 "zerocopy_threshold": 0, 00:22:31.455 "tls_version": 0, 00:22:31.455 "enable_ktls": false 00:22:31.455 } 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "method": "sock_impl_set_options", 00:22:31.455 "params": { 00:22:31.455 "impl_name": "posix", 00:22:31.455 "recv_buf_size": 2097152, 00:22:31.455 "send_buf_size": 2097152, 00:22:31.455 "enable_recv_pipe": true, 00:22:31.455 "enable_quickack": false, 00:22:31.455 "enable_placement_id": 0, 00:22:31.455 "enable_zerocopy_send_server": true, 00:22:31.455 "enable_zerocopy_send_client": false, 00:22:31.455 "zerocopy_threshold": 0, 00:22:31.455 "tls_version": 0, 00:22:31.455 "enable_ktls": false 00:22:31.455 } 00:22:31.455 } 00:22:31.455 ] 00:22:31.455 }, 00:22:31.455 { 
00:22:31.455 "subsystem": "vmd", 00:22:31.455 "config": [] 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "subsystem": "accel", 00:22:31.455 "config": [ 00:22:31.455 { 00:22:31.455 "method": "accel_set_options", 00:22:31.455 "params": { 00:22:31.455 "small_cache_size": 128, 00:22:31.455 "large_cache_size": 16, 00:22:31.455 "task_count": 2048, 00:22:31.455 "sequence_count": 2048, 00:22:31.455 "buf_count": 2048 00:22:31.455 } 00:22:31.455 } 00:22:31.455 ] 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "subsystem": "bdev", 00:22:31.455 "config": [ 00:22:31.455 { 00:22:31.455 "method": "bdev_set_options", 00:22:31.455 "params": { 00:22:31.455 "bdev_io_pool_size": 65535, 00:22:31.455 "bdev_io_cache_size": 256, 00:22:31.455 "bdev_auto_examine": true, 00:22:31.455 "iobuf_small_cache_size": 128, 00:22:31.455 "iobuf_large_cache_size": 16 00:22:31.455 } 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "method": "bdev_raid_set_options", 00:22:31.455 "params": { 00:22:31.455 "process_window_size_kb": 1024, 00:22:31.455 "process_max_bandwidth_mb_sec": 0 00:22:31.455 } 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "method": "bdev_iscsi_set_options", 00:22:31.455 "params": { 00:22:31.455 "timeout_sec": 30 00:22:31.455 } 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "method": "bdev_nvme_set_options", 00:22:31.455 "params": { 00:22:31.455 "action_on_timeout": "none", 00:22:31.455 "timeout_us": 0, 00:22:31.455 "timeout_admin_us": 0, 00:22:31.455 "keep_alive_timeout_ms": 10000, 00:22:31.455 "arbitration_burst": 0, 00:22:31.455 "low_priority_weight": 0, 00:22:31.455 "medium_priority_weight": 0, 00:22:31.455 "high_priority_weight": 0, 00:22:31.455 "nvme_adminq_poll_period_us": 10000, 00:22:31.455 "nvme_ioq_poll_period_us": 0, 00:22:31.455 "io_queue_requests": 0, 00:22:31.455 "delay_cmd_submit": true, 00:22:31.455 "transport_retry_count": 4, 00:22:31.455 "bdev_retry_count": 3, 00:22:31.455 "transport_ack_timeout": 0, 00:22:31.455 "ctrlr_loss_timeout_sec": 0, 00:22:31.455 "reconnect_delay_sec": 0, 
00:22:31.455 "fast_io_fail_timeout_sec": 0, 00:22:31.455 "disable_auto_failback": false, 00:22:31.455 "generate_uuids": false, 00:22:31.455 "transport_tos": 0, 00:22:31.455 "nvme_error_stat": false, 00:22:31.455 "rdma_srq_size": 0, 00:22:31.455 "io_path_stat": false, 00:22:31.455 "allow_accel_sequence": false, 00:22:31.455 "rdma_max_cq_size": 0, 00:22:31.455 "rdma_cm_event_timeout_ms": 0, 00:22:31.455 "dhchap_digests": [ 00:22:31.455 "sha256", 00:22:31.455 "sha384", 00:22:31.455 "sha512" 00:22:31.455 ], 00:22:31.455 "dhchap_dhgroups": [ 00:22:31.455 "null", 00:22:31.455 "ffdhe2048", 00:22:31.455 "ffdhe3072", 00:22:31.455 "ffdhe4096", 00:22:31.455 "ffdhe6144", 00:22:31.455 "ffdhe8192" 00:22:31.455 ] 00:22:31.455 } 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "method": "bdev_nvme_set_hotplug", 00:22:31.455 "params": { 00:22:31.455 "period_us": 100000, 00:22:31.455 "enable": false 00:22:31.455 } 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "method": "bdev_malloc_create", 00:22:31.455 "params": { 00:22:31.455 "name": "malloc0", 00:22:31.455 "num_blocks": 8192, 00:22:31.455 "block_size": 4096, 00:22:31.455 "physical_block_size": 4096, 00:22:31.455 "uuid": "d864885a-2e35-4020-9f8a-2094949931e9", 00:22:31.455 "optimal_io_boundary": 0, 00:22:31.455 "md_size": 0, 00:22:31.455 "dif_type": 0, 00:22:31.455 "dif_is_head_of_md": false, 00:22:31.455 "dif_pi_format": 0 00:22:31.455 } 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "method": "bdev_wait_for_examine" 00:22:31.455 } 00:22:31.455 ] 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "subsystem": "nbd", 00:22:31.455 "config": [] 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "subsystem": "scheduler", 00:22:31.455 "config": [ 00:22:31.455 { 00:22:31.455 "method": "framework_set_scheduler", 00:22:31.455 "params": { 00:22:31.455 "name": "static" 00:22:31.455 } 00:22:31.455 } 00:22:31.455 ] 00:22:31.455 }, 00:22:31.455 { 00:22:31.455 "subsystem": "nvmf", 00:22:31.455 "config": [ 00:22:31.455 { 00:22:31.455 "method": "nvmf_set_config", 
00:22:31.455 "params": { 00:22:31.455 "discovery_filter": "match_any", 00:22:31.455 "admin_cmd_passthru": { 00:22:31.455 "identify_ctrlr": false 00:22:31.455 }, 00:22:31.455 "dhchap_digests": [ 00:22:31.455 "sha256", 00:22:31.455 "sha384", 00:22:31.455 "sha512" 00:22:31.455 ], 00:22:31.455 "dhchap_dhgroups": [ 00:22:31.456 "null", 00:22:31.456 "ffdhe2048", 00:22:31.456 "ffdhe3072", 00:22:31.456 "ffdhe4096", 00:22:31.456 "ffdhe6144", 00:22:31.456 "ffdhe8192" 00:22:31.456 ] 00:22:31.456 } 00:22:31.456 }, 00:22:31.456 { 00:22:31.456 "method": "nvmf_set_max_subsystems", 00:22:31.456 "params": { 00:22:31.456 "max_subsystems": 1024 00:22:31.456 } 00:22:31.456 }, 00:22:31.456 { 00:22:31.456 "method": "nvmf_set_crdt", 00:22:31.456 "params": { 00:22:31.456 "crdt1": 0, 00:22:31.456 "crdt2": 0, 00:22:31.456 "crdt3": 0 00:22:31.456 } 00:22:31.456 }, 00:22:31.456 { 00:22:31.456 "method": "nvmf_create_transport", 00:22:31.456 "params": { 00:22:31.456 "trtype": "TCP", 00:22:31.456 "max_queue_depth": 128, 00:22:31.456 "max_io_qpairs_per_ctrlr": 127, 00:22:31.456 "in_capsule_data_size": 4096, 00:22:31.456 "max_io_size": 131072, 00:22:31.456 "io_unit_size": 131072, 00:22:31.456 "max_aq_depth": 128, 00:22:31.456 "num_shared_buffers": 511, 00:22:31.456 "buf_cache_size": 4294967295, 00:22:31.456 "dif_insert_or_strip": false, 00:22:31.456 "zcopy": false, 00:22:31.456 "c2h_success": false, 00:22:31.456 "sock_priority": 0, 00:22:31.456 "abort_timeout_sec": 1, 00:22:31.456 "ack_timeout": 0, 00:22:31.456 "data_wr_pool_size": 0 00:22:31.456 } 00:22:31.456 }, 00:22:31.456 { 00:22:31.456 "method": "nvmf_create_subsystem", 00:22:31.456 "params": { 00:22:31.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.456 "allow_any_host": false, 00:22:31.456 "serial_number": "00000000000000000000", 00:22:31.456 "model_number": "SPDK bdev Controller", 00:22:31.456 "max_namespaces": 32, 00:22:31.456 "min_cntlid": 1, 00:22:31.456 "max_cntlid": 65519, 00:22:31.456 "ana_reporting": false 00:22:31.456 } 
00:22:31.456 }, 00:22:31.456 { 00:22:31.456 "method": "nvmf_subsystem_add_host", 00:22:31.456 "params": { 00:22:31.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.456 "host": "nqn.2016-06.io.spdk:host1", 00:22:31.456 "psk": "key0" 00:22:31.456 } 00:22:31.456 }, 00:22:31.456 { 00:22:31.456 "method": "nvmf_subsystem_add_ns", 00:22:31.456 "params": { 00:22:31.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.456 "namespace": { 00:22:31.456 "nsid": 1, 00:22:31.456 "bdev_name": "malloc0", 00:22:31.456 "nguid": "D864885A2E3540209F8A2094949931E9", 00:22:31.456 "uuid": "d864885a-2e35-4020-9f8a-2094949931e9", 00:22:31.456 "no_auto_visible": false 00:22:31.456 } 00:22:31.456 } 00:22:31.456 }, 00:22:31.456 { 00:22:31.456 "method": "nvmf_subsystem_add_listener", 00:22:31.456 "params": { 00:22:31.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.456 "listen_address": { 00:22:31.456 "trtype": "TCP", 00:22:31.456 "adrfam": "IPv4", 00:22:31.456 "traddr": "10.0.0.2", 00:22:31.456 "trsvcid": "4420" 00:22:31.456 }, 00:22:31.456 "secure_channel": false, 00:22:31.456 "sock_impl": "ssl" 00:22:31.456 } 00:22:31.456 } 00:22:31.456 ] 00:22:31.456 } 00:22:31.456 ] 00:22:31.456 }' 00:22:31.456 13:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:31.714 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:22:31.714 "subsystems": [ 00:22:31.714 { 00:22:31.714 "subsystem": "keyring", 00:22:31.714 "config": [ 00:22:31.714 { 00:22:31.714 "method": "keyring_file_add_key", 00:22:31.714 "params": { 00:22:31.714 "name": "key0", 00:22:31.714 "path": "/tmp/tmp.p85alnbf81" 00:22:31.714 } 00:22:31.714 } 00:22:31.714 ] 00:22:31.714 }, 00:22:31.714 { 00:22:31.714 "subsystem": "iobuf", 00:22:31.714 "config": [ 00:22:31.714 { 00:22:31.714 "method": "iobuf_set_options", 00:22:31.714 "params": { 00:22:31.714 "small_pool_count": 8192, 00:22:31.714 
"large_pool_count": 1024, 00:22:31.714 "small_bufsize": 8192, 00:22:31.714 "large_bufsize": 135168 00:22:31.714 } 00:22:31.714 } 00:22:31.714 ] 00:22:31.714 }, 00:22:31.714 { 00:22:31.714 "subsystem": "sock", 00:22:31.714 "config": [ 00:22:31.714 { 00:22:31.714 "method": "sock_set_default_impl", 00:22:31.714 "params": { 00:22:31.714 "impl_name": "posix" 00:22:31.714 } 00:22:31.714 }, 00:22:31.714 { 00:22:31.714 "method": "sock_impl_set_options", 00:22:31.714 "params": { 00:22:31.714 "impl_name": "ssl", 00:22:31.714 "recv_buf_size": 4096, 00:22:31.714 "send_buf_size": 4096, 00:22:31.714 "enable_recv_pipe": true, 00:22:31.714 "enable_quickack": false, 00:22:31.714 "enable_placement_id": 0, 00:22:31.714 "enable_zerocopy_send_server": true, 00:22:31.714 "enable_zerocopy_send_client": false, 00:22:31.714 "zerocopy_threshold": 0, 00:22:31.714 "tls_version": 0, 00:22:31.714 "enable_ktls": false 00:22:31.714 } 00:22:31.714 }, 00:22:31.714 { 00:22:31.714 "method": "sock_impl_set_options", 00:22:31.714 "params": { 00:22:31.714 "impl_name": "posix", 00:22:31.714 "recv_buf_size": 2097152, 00:22:31.714 "send_buf_size": 2097152, 00:22:31.714 "enable_recv_pipe": true, 00:22:31.714 "enable_quickack": false, 00:22:31.714 "enable_placement_id": 0, 00:22:31.714 "enable_zerocopy_send_server": true, 00:22:31.714 "enable_zerocopy_send_client": false, 00:22:31.714 "zerocopy_threshold": 0, 00:22:31.714 "tls_version": 0, 00:22:31.714 "enable_ktls": false 00:22:31.714 } 00:22:31.714 } 00:22:31.714 ] 00:22:31.714 }, 00:22:31.714 { 00:22:31.714 "subsystem": "vmd", 00:22:31.714 "config": [] 00:22:31.714 }, 00:22:31.714 { 00:22:31.714 "subsystem": "accel", 00:22:31.714 "config": [ 00:22:31.714 { 00:22:31.714 "method": "accel_set_options", 00:22:31.714 "params": { 00:22:31.714 "small_cache_size": 128, 00:22:31.714 "large_cache_size": 16, 00:22:31.714 "task_count": 2048, 00:22:31.714 "sequence_count": 2048, 00:22:31.714 "buf_count": 2048 00:22:31.714 } 00:22:31.714 } 00:22:31.714 ] 00:22:31.714 
}, 00:22:31.714 { 00:22:31.714 "subsystem": "bdev", 00:22:31.714 "config": [ 00:22:31.714 { 00:22:31.714 "method": "bdev_set_options", 00:22:31.714 "params": { 00:22:31.714 "bdev_io_pool_size": 65535, 00:22:31.714 "bdev_io_cache_size": 256, 00:22:31.714 "bdev_auto_examine": true, 00:22:31.714 "iobuf_small_cache_size": 128, 00:22:31.714 "iobuf_large_cache_size": 16 00:22:31.714 } 00:22:31.714 }, 00:22:31.714 { 00:22:31.714 "method": "bdev_raid_set_options", 00:22:31.714 "params": { 00:22:31.714 "process_window_size_kb": 1024, 00:22:31.714 "process_max_bandwidth_mb_sec": 0 00:22:31.714 } 00:22:31.714 }, 00:22:31.714 { 00:22:31.714 "method": "bdev_iscsi_set_options", 00:22:31.714 "params": { 00:22:31.714 "timeout_sec": 30 00:22:31.714 } 00:22:31.714 }, 00:22:31.715 { 00:22:31.715 "method": "bdev_nvme_set_options", 00:22:31.715 "params": { 00:22:31.715 "action_on_timeout": "none", 00:22:31.715 "timeout_us": 0, 00:22:31.715 "timeout_admin_us": 0, 00:22:31.715 "keep_alive_timeout_ms": 10000, 00:22:31.715 "arbitration_burst": 0, 00:22:31.715 "low_priority_weight": 0, 00:22:31.715 "medium_priority_weight": 0, 00:22:31.715 "high_priority_weight": 0, 00:22:31.715 "nvme_adminq_poll_period_us": 10000, 00:22:31.715 "nvme_ioq_poll_period_us": 0, 00:22:31.715 "io_queue_requests": 512, 00:22:31.715 "delay_cmd_submit": true, 00:22:31.715 "transport_retry_count": 4, 00:22:31.715 "bdev_retry_count": 3, 00:22:31.715 "transport_ack_timeout": 0, 00:22:31.715 "ctrlr_loss_timeout_sec": 0, 00:22:31.715 "reconnect_delay_sec": 0, 00:22:31.715 "fast_io_fail_timeout_sec": 0, 00:22:31.715 "disable_auto_failback": false, 00:22:31.715 "generate_uuids": false, 00:22:31.715 "transport_tos": 0, 00:22:31.715 "nvme_error_stat": false, 00:22:31.715 "rdma_srq_size": 0, 00:22:31.715 "io_path_stat": false, 00:22:31.715 "allow_accel_sequence": false, 00:22:31.715 "rdma_max_cq_size": 0, 00:22:31.715 "rdma_cm_event_timeout_ms": 0, 00:22:31.715 "dhchap_digests": [ 00:22:31.715 "sha256", 00:22:31.715 "sha384", 
00:22:31.715 "sha512" 00:22:31.715 ], 00:22:31.715 "dhchap_dhgroups": [ 00:22:31.715 "null", 00:22:31.715 "ffdhe2048", 00:22:31.715 "ffdhe3072", 00:22:31.715 "ffdhe4096", 00:22:31.715 "ffdhe6144", 00:22:31.715 "ffdhe8192" 00:22:31.715 ] 00:22:31.715 } 00:22:31.715 }, 00:22:31.715 { 00:22:31.715 "method": "bdev_nvme_attach_controller", 00:22:31.715 "params": { 00:22:31.715 "name": "nvme0", 00:22:31.715 "trtype": "TCP", 00:22:31.715 "adrfam": "IPv4", 00:22:31.715 "traddr": "10.0.0.2", 00:22:31.715 "trsvcid": "4420", 00:22:31.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.715 "prchk_reftag": false, 00:22:31.715 "prchk_guard": false, 00:22:31.715 "ctrlr_loss_timeout_sec": 0, 00:22:31.715 "reconnect_delay_sec": 0, 00:22:31.715 "fast_io_fail_timeout_sec": 0, 00:22:31.715 "psk": "key0", 00:22:31.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.715 "hdgst": false, 00:22:31.715 "ddgst": false, 00:22:31.715 "multipath": "multipath" 00:22:31.715 } 00:22:31.715 }, 00:22:31.715 { 00:22:31.715 "method": "bdev_nvme_set_hotplug", 00:22:31.715 "params": { 00:22:31.715 "period_us": 100000, 00:22:31.715 "enable": false 00:22:31.715 } 00:22:31.715 }, 00:22:31.715 { 00:22:31.715 "method": "bdev_enable_histogram", 00:22:31.715 "params": { 00:22:31.715 "name": "nvme0n1", 00:22:31.715 "enable": true 00:22:31.715 } 00:22:31.715 }, 00:22:31.715 { 00:22:31.715 "method": "bdev_wait_for_examine" 00:22:31.715 } 00:22:31.715 ] 00:22:31.715 }, 00:22:31.715 { 00:22:31.715 "subsystem": "nbd", 00:22:31.715 "config": [] 00:22:31.715 } 00:22:31.715 ] 00:22:31.715 }' 00:22:31.715 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1831394 00:22:31.715 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1831394 ']' 00:22:31.715 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1831394 00:22:31.715 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:22:31.715 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:31.715 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1831394 00:22:31.715 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:31.715 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:31.715 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1831394' 00:22:31.715 killing process with pid 1831394 00:22:31.715 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1831394 00:22:31.715 Received shutdown signal, test time was about 1.000000 seconds 00:22:31.715 00:22:31.715 Latency(us) 00:22:31.715 [2024-10-07T11:33:13.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.715 [2024-10-07T11:33:13.427Z] =================================================================================================================== 00:22:31.715 [2024-10-07T11:33:13.427Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:31.715 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1831394 00:22:31.973 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1831363 00:22:31.973 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1831363 ']' 00:22:31.973 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1831363 00:22:31.973 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:31.973 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:31.973 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
ps --no-headers -o comm= 1831363 00:22:31.973 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:31.973 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:31.973 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1831363' 00:22:31.973 killing process with pid 1831363 00:22:31.973 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1831363 00:22:31.973 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1831363 00:22:32.232 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:32.232 "subsystems": [ 00:22:32.232 { 00:22:32.232 "subsystem": "keyring", 00:22:32.232 "config": [ 00:22:32.232 { 00:22:32.232 "method": "keyring_file_add_key", 00:22:32.232 "params": { 00:22:32.232 "name": "key0", 00:22:32.232 "path": "/tmp/tmp.p85alnbf81" 00:22:32.232 } 00:22:32.232 } 00:22:32.232 ] 00:22:32.232 }, 00:22:32.232 { 00:22:32.232 "subsystem": "iobuf", 00:22:32.232 "config": [ 00:22:32.232 { 00:22:32.232 "method": "iobuf_set_options", 00:22:32.232 "params": { 00:22:32.232 "small_pool_count": 8192, 00:22:32.232 "large_pool_count": 1024, 00:22:32.232 "small_bufsize": 8192, 00:22:32.232 "large_bufsize": 135168 00:22:32.232 } 00:22:32.232 } 00:22:32.232 ] 00:22:32.232 }, 00:22:32.232 { 00:22:32.232 "subsystem": "sock", 00:22:32.232 "config": [ 00:22:32.232 { 00:22:32.232 "method": "sock_set_default_impl", 00:22:32.232 "params": { 00:22:32.232 "impl_name": "posix" 00:22:32.232 } 00:22:32.232 }, 00:22:32.232 { 00:22:32.232 "method": "sock_impl_set_options", 00:22:32.232 "params": { 00:22:32.232 "impl_name": "ssl", 00:22:32.232 "recv_buf_size": 4096, 00:22:32.232 "send_buf_size": 4096, 00:22:32.232 "enable_recv_pipe": true, 00:22:32.232 "enable_quickack": false, 00:22:32.232 "enable_placement_id": 0, 
00:22:32.232 "enable_zerocopy_send_server": true, 00:22:32.232 "enable_zerocopy_send_client": false, 00:22:32.232 "zerocopy_threshold": 0, 00:22:32.232 "tls_version": 0, 00:22:32.232 "enable_ktls": false 00:22:32.232 } 00:22:32.232 }, 00:22:32.232 { 00:22:32.232 "method": "sock_impl_set_options", 00:22:32.232 "params": { 00:22:32.232 "impl_name": "posix", 00:22:32.232 "recv_buf_size": 2097152, 00:22:32.232 "send_buf_size": 2097152, 00:22:32.232 "enable_recv_pipe": true, 00:22:32.232 "enable_quickack": false, 00:22:32.232 "enable_placement_id": 0, 00:22:32.232 "enable_zerocopy_send_server": true, 00:22:32.232 "enable_zerocopy_send_client": false, 00:22:32.232 "zerocopy_threshold": 0, 00:22:32.232 "tls_version": 0, 00:22:32.232 "enable_ktls": false 00:22:32.232 } 00:22:32.232 } 00:22:32.232 ] 00:22:32.232 }, 00:22:32.232 { 00:22:32.232 "subsystem": "vmd", 00:22:32.232 "config": [] 00:22:32.232 }, 00:22:32.232 { 00:22:32.232 "subsystem": "accel", 00:22:32.232 "config": [ 00:22:32.232 { 00:22:32.232 "method": "accel_set_options", 00:22:32.232 "params": { 00:22:32.232 "small_cache_size": 128, 00:22:32.232 "large_cache_size": 16, 00:22:32.232 "task_count": 2048, 00:22:32.232 "sequence_count": 2048, 00:22:32.232 "buf_count": 2048 00:22:32.232 } 00:22:32.232 } 00:22:32.232 ] 00:22:32.232 }, 00:22:32.232 { 00:22:32.232 "subsystem": "bdev", 00:22:32.232 "config": [ 00:22:32.232 { 00:22:32.232 "method": "bdev_set_options", 00:22:32.232 "params": { 00:22:32.232 "bdev_io_pool_size": 65535, 00:22:32.232 "bdev_io_cache_size": 256, 00:22:32.232 "bdev_auto_examine": true, 00:22:32.232 "iobuf_small_cache_size": 128, 00:22:32.233 "iobuf_large_cache_size": 16 00:22:32.233 } 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "method": "bdev_raid_set_options", 00:22:32.233 "params": { 00:22:32.233 "process_window_size_kb": 1024, 00:22:32.233 "process_max_bandwidth_mb_sec": 0 00:22:32.233 } 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "method": "bdev_iscsi_set_options", 00:22:32.233 "params": { 
00:22:32.233 "timeout_sec": 30 00:22:32.233 } 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "method": "bdev_nvme_set_options", 00:22:32.233 "params": { 00:22:32.233 "action_on_timeout": "none", 00:22:32.233 "timeout_us": 0, 00:22:32.233 "timeout_admin_us": 0, 00:22:32.233 "keep_alive_timeout_ms": 10000, 00:22:32.233 "arbitration_burst": 0, 00:22:32.233 "low_priority_weight": 0, 00:22:32.233 "medium_priority_weight": 0, 00:22:32.233 "high_priority_weight": 0, 00:22:32.233 "nvme_adminq_poll_period_us": 10000, 00:22:32.233 "nvme_ioq_poll_period_us": 0, 00:22:32.233 "io_queue_requests": 0, 00:22:32.233 "delay_cmd_submit": true, 00:22:32.233 "transport_retry_count": 4, 00:22:32.233 "bdev_retry_count": 3, 00:22:32.233 "transport_ack_timeout": 0, 00:22:32.233 "ctrlr_loss_timeout_sec": 0, 00:22:32.233 "reconnect_delay_sec": 0, 00:22:32.233 "fast_io_fail_timeout_sec": 0, 00:22:32.233 "disable_auto_failback": false, 00:22:32.233 "generate_uuids": false, 00:22:32.233 "transport_tos": 0, 00:22:32.233 "nvme_error_stat": false, 00:22:32.233 "rdma_srq_size": 0, 00:22:32.233 "io_path_stat": false, 00:22:32.233 "allow_accel_sequence": false, 00:22:32.233 "rdma_max_cq_size": 0, 00:22:32.233 "rdma_cm_event_timeout_ms": 0, 00:22:32.233 "dhchap_digests": [ 00:22:32.233 "sha256", 00:22:32.233 "sha384", 00:22:32.233 "sha512" 00:22:32.233 ], 00:22:32.233 "dhchap_dhgroups": [ 00:22:32.233 "null", 00:22:32.233 "ffdhe2048", 00:22:32.233 "ffdhe3072", 00:22:32.233 "ffdhe4096", 00:22:32.233 "ffdhe6144", 00:22:32.233 "ffdhe8192" 00:22:32.233 ] 00:22:32.233 } 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "method": "bdev_nvme_set_hotplug", 00:22:32.233 "params": { 00:22:32.233 "period_us": 100000, 00:22:32.233 "enable": false 00:22:32.233 } 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "method": "bdev_malloc_create", 00:22:32.233 "params": { 00:22:32.233 "name": "malloc0", 00:22:32.233 "num_blocks": 8192, 00:22:32.233 "block_size": 4096, 00:22:32.233 "physical_block_size": 4096, 00:22:32.233 "uuid": 
"d864885a-2e35-4020-9f8a-2094949931e9", 00:22:32.233 "optimal_io_boundary": 0, 00:22:32.233 "md_size": 0, 00:22:32.233 "dif_type": 0, 00:22:32.233 "dif_is_head_of_md": false, 00:22:32.233 "dif_pi_format": 0 00:22:32.233 } 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "method": "bdev_wait_for_examine" 00:22:32.233 } 00:22:32.233 ] 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "subsystem": "nbd", 00:22:32.233 "config": [] 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "subsystem": "scheduler", 00:22:32.233 "config": [ 00:22:32.233 { 00:22:32.233 "method": "framework_set_scheduler", 00:22:32.233 "params": { 00:22:32.233 "name": "static" 00:22:32.233 } 00:22:32.233 } 00:22:32.233 ] 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "subsystem": "nvmf", 00:22:32.233 "config": [ 00:22:32.233 { 00:22:32.233 "method": "nvmf_set_config", 00:22:32.233 "params": { 00:22:32.233 "discovery_filter": "match_any", 00:22:32.233 "admin_cmd_passthru": { 00:22:32.233 "identify_ctrlr": false 00:22:32.233 }, 00:22:32.233 "dhchap_digests": [ 00:22:32.233 "sha256", 00:22:32.233 "sha384", 00:22:32.233 "sha512" 00:22:32.233 ], 00:22:32.233 "dhchap_dhgroups": [ 00:22:32.233 "null", 00:22:32.233 "ffdhe2048", 00:22:32.233 "ffdhe3072", 00:22:32.233 "ffdhe4096", 00:22:32.233 "ffdhe6144", 00:22:32.233 "ffdhe8192" 00:22:32.233 ] 00:22:32.233 } 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "method": "nvmf_set_max_subsystems", 00:22:32.233 "params": { 00:22:32.233 "max_subsystems": 1024 00:22:32.233 } 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "method": "nvmf_set_crdt", 00:22:32.233 "params": { 00:22:32.233 "crdt1": 0, 00:22:32.233 "crdt2": 0, 00:22:32.233 "crdt3": 0 00:22:32.233 } 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "method": "nvmf_create_transport", 00:22:32.233 "params": { 00:22:32.233 "trtype": "TCP", 00:22:32.233 "max_queue_depth": 128, 00:22:32.233 "max_io_qpairs_per_ctrlr": 127, 00:22:32.233 "in_capsule_data_size": 4096, 00:22:32.233 "max_io_size": 131072, 00:22:32.233 "io_unit_size": 131072, 
00:22:32.233 "max_aq_depth": 128, 00:22:32.233 "num_shared_buffers": 511, 00:22:32.233 "buf_cache_size": 4294967295, 00:22:32.233 "dif_insert_or_strip": false, 00:22:32.233 "zcopy": false, 00:22:32.233 "c2h_success": false, 00:22:32.233 "sock_priority": 0, 00:22:32.233 "abort_timeout_sec": 1, 00:22:32.233 "ack_timeout": 0, 00:22:32.233 "data_wr_pool_size": 0 00:22:32.233 } 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "method": "nvmf_create_subsystem", 00:22:32.233 "params": { 00:22:32.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.233 "allow_any_host": false, 00:22:32.233 "serial_number": "00000000000000000000", 00:22:32.233 "model_number": "SPDK bdev Controller", 00:22:32.233 "max_namespaces": 32, 00:22:32.233 "min_cntlid": 1, 00:22:32.233 "max_cntlid": 65519, 00:22:32.233 "ana_reporting": false 00:22:32.233 } 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "method": "nvmf_subsystem_add_host", 00:22:32.233 "params": { 00:22:32.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.233 "host": "nqn.2016-06.io.spdk:host1", 00:22:32.233 "psk": "key0" 00:22:32.233 } 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "method": "nvmf_subsystem_add_ns", 00:22:32.233 "params": { 00:22:32.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.233 "namespace": { 00:22:32.233 "nsid": 1, 00:22:32.233 "bdev_name": "malloc0", 00:22:32.233 "nguid": "D864885A2E3540209F8A2094949931E9", 00:22:32.233 "uuid": "d864885a-2e35-4020-9f8a-2094949931e9", 00:22:32.233 "no_auto_visible": false 00:22:32.233 } 00:22:32.233 } 00:22:32.233 }, 00:22:32.233 { 00:22:32.233 "method": "nvmf_subsystem_add_listener", 00:22:32.233 "params": { 00:22:32.233 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:32.233 "listen_address": { 00:22:32.233 "trtype": "TCP", 00:22:32.233 "adrfam": "IPv4", 00:22:32.233 "traddr": "10.0.0.2", 00:22:32.233 "trsvcid": "4420" 00:22:32.233 }, 00:22:32.233 "secure_channel": false, 00:22:32.233 "sock_impl": "ssl" 00:22:32.233 } 00:22:32.233 } 00:22:32.233 ] 00:22:32.233 } 00:22:32.233 ] 00:22:32.233 }' 
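In the `nvmf_subsystem_add_ns` params echoed above, the `nguid` value is the namespace `uuid` with the dashes dropped and the hex digits uppercased. A minimal sketch verifying that relationship for the values in this config (the derivation shown is an observation from this log, not a statement of SPDK internals):

```python
import uuid

# UUID taken from the nvmf_subsystem_add_ns params echoed above.
ns_uuid = uuid.UUID("d864885a-2e35-4020-9f8a-2094949931e9")

# The echoed "nguid" is the same 16 bytes rendered as 32 uppercase
# hex digits with the dashes removed.
nguid = ns_uuid.hex.upper()
print(nguid)  # D864885A2E3540209F8A2094949931E9
```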
00:22:32.233 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:32.233 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:32.233 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:32.233 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.233 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # nvmfpid=1831784 00:22:32.233 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:32.233 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # waitforlisten 1831784 00:22:32.233 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1831784 ']' 00:22:32.233 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.233 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:32.233 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:32.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.233 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:32.233 13:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.492 [2024-10-07 13:33:13.986512] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:22:32.492 [2024-10-07 13:33:13.986590] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.492 [2024-10-07 13:33:14.047087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.492 [2024-10-07 13:33:14.152059] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.492 [2024-10-07 13:33:14.152115] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.492 [2024-10-07 13:33:14.152128] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.492 [2024-10-07 13:33:14.152140] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.492 [2024-10-07 13:33:14.152150] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
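The target above is launched with `-c /dev/fd/62`, which is consistent with bash process substitution feeding the echoed JSON config to `nvmf_tgt` as a file descriptor (a sketch of the mechanism, not the test script's exact invocation):

```shell
#!/usr/bin/env bash
# Process substitution <(...) exposes a command's stdout as a /dev/fd/NN
# path, which is how an echoed JSON config can be passed to a flag that
# expects a config file, e.g.:
#   nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$config_json")
# Minimal stand-in demonstrating the same mechanism:
config='{"subsystems": []}'
cat <(printf '%s\n' "$config")
```

Note this requires bash (or another shell with process substitution); it is not POSIX sh.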
00:22:32.492 [2024-10-07 13:33:14.152733] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.750 [2024-10-07 13:33:14.408924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.750 [2024-10-07 13:33:14.440939] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:32.750 [2024-10-07 13:33:14.441221] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.316 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:33.316 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:33.316 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:33.316 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:33.316 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.574 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.574 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1831929 00:22:33.574 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1831929 /var/tmp/bdevperf.sock 00:22:33.574 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 1831929 ']' 00:22:33.574 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.574 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:33.574 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:22:33.574 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.574 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:33.574 "subsystems": [ 00:22:33.574 { 00:22:33.574 "subsystem": "keyring", 00:22:33.574 "config": [ 00:22:33.574 { 00:22:33.574 "method": "keyring_file_add_key", 00:22:33.574 "params": { 00:22:33.574 "name": "key0", 00:22:33.574 "path": "/tmp/tmp.p85alnbf81" 00:22:33.574 } 00:22:33.574 } 00:22:33.575 ] 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "subsystem": "iobuf", 00:22:33.575 "config": [ 00:22:33.575 { 00:22:33.575 "method": "iobuf_set_options", 00:22:33.575 "params": { 00:22:33.575 "small_pool_count": 8192, 00:22:33.575 "large_pool_count": 1024, 00:22:33.575 "small_bufsize": 8192, 00:22:33.575 "large_bufsize": 135168 00:22:33.575 } 00:22:33.575 } 00:22:33.575 ] 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "subsystem": "sock", 00:22:33.575 "config": [ 00:22:33.575 { 00:22:33.575 "method": "sock_set_default_impl", 00:22:33.575 "params": { 00:22:33.575 "impl_name": "posix" 00:22:33.575 } 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "method": "sock_impl_set_options", 00:22:33.575 "params": { 00:22:33.575 "impl_name": "ssl", 00:22:33.575 "recv_buf_size": 4096, 00:22:33.575 "send_buf_size": 4096, 00:22:33.575 "enable_recv_pipe": true, 00:22:33.575 "enable_quickack": false, 00:22:33.575 "enable_placement_id": 0, 00:22:33.575 "enable_zerocopy_send_server": true, 00:22:33.575 "enable_zerocopy_send_client": false, 00:22:33.575 "zerocopy_threshold": 0, 00:22:33.575 "tls_version": 0, 00:22:33.575 "enable_ktls": false 00:22:33.575 } 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "method": "sock_impl_set_options", 00:22:33.575 "params": { 00:22:33.575 "impl_name": "posix", 
00:22:33.575 "recv_buf_size": 2097152, 00:22:33.575 "send_buf_size": 2097152, 00:22:33.575 "enable_recv_pipe": true, 00:22:33.575 "enable_quickack": false, 00:22:33.575 "enable_placement_id": 0, 00:22:33.575 "enable_zerocopy_send_server": true, 00:22:33.575 "enable_zerocopy_send_client": false, 00:22:33.575 "zerocopy_threshold": 0, 00:22:33.575 "tls_version": 0, 00:22:33.575 "enable_ktls": false 00:22:33.575 } 00:22:33.575 } 00:22:33.575 ] 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "subsystem": "vmd", 00:22:33.575 "config": [] 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "subsystem": "accel", 00:22:33.575 "config": [ 00:22:33.575 { 00:22:33.575 "method": "accel_set_options", 00:22:33.575 "params": { 00:22:33.575 "small_cache_size": 128, 00:22:33.575 "large_cache_size": 16, 00:22:33.575 "task_count": 2048, 00:22:33.575 "sequence_count": 2048, 00:22:33.575 "buf_count": 2048 00:22:33.575 } 00:22:33.575 } 00:22:33.575 ] 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "subsystem": "bdev", 00:22:33.575 "config": [ 00:22:33.575 { 00:22:33.575 "method": "bdev_set_options", 00:22:33.575 "params": { 00:22:33.575 "bdev_io_pool_size": 65535, 00:22:33.575 "bdev_io_cache_size": 256, 00:22:33.575 "bdev_auto_examine": true, 00:22:33.575 "iobuf_small_cache_size": 128, 00:22:33.575 "iobuf_large_cache_size": 16 00:22:33.575 } 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "method": "bdev_raid_set_options", 00:22:33.575 "params": { 00:22:33.575 "process_window_size_kb": 1024, 00:22:33.575 "process_max_bandwidth_mb_sec": 0 00:22:33.575 } 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "method": "bdev_iscsi_set_options", 00:22:33.575 "params": { 00:22:33.575 "timeout_sec": 30 00:22:33.575 } 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "method": "bdev_nvme_set_options", 00:22:33.575 "params": { 00:22:33.575 "action_on_timeout": "none", 00:22:33.575 "timeout_us": 0, 00:22:33.575 "timeout_admin_us": 0, 00:22:33.575 "keep_alive_timeout_ms": 10000, 00:22:33.575 "arbitration_burst": 0, 00:22:33.575 
"low_priority_weight": 0, 00:22:33.575 "medium_priority_weight": 0, 00:22:33.575 "high_priority_weight": 0, 00:22:33.575 "nvme_adminq_poll_period_us": 10000, 00:22:33.575 "nvme_ioq_poll_period_us": 0, 00:22:33.575 "io_queue_requests": 512, 00:22:33.575 "delay_cmd_submit": true, 00:22:33.575 "transport_retry_count": 4, 00:22:33.575 "bdev_retry_count": 3, 00:22:33.575 "transport_ack_timeout": 0, 00:22:33.575 "ctrlr_loss_timeout_sec": 0, 00:22:33.575 "reconnect_delay_sec": 0, 00:22:33.575 "fast_io_fail_timeout_sec": 0, 00:22:33.575 "disable_auto_failback": false, 00:22:33.575 "generate_uuids": false, 00:22:33.575 "transport_tos": 0, 00:22:33.575 "nvme_error_stat": false, 00:22:33.575 "rdma_srq_size": 0, 00:22:33.575 "io_path_stat": false, 00:22:33.575 "allow_accel_sequence": false, 00:22:33.575 "rdma_max_cq_size": 0, 00:22:33.575 "rdma_cm_event_timeout_ms": 0, 00:22:33.575 "dhchap_digests": [ 00:22:33.575 "sha256", 00:22:33.575 "sha384", 00:22:33.575 "sha512" 00:22:33.575 ], 00:22:33.575 "dhchap_dhgroups": [ 00:22:33.575 "null", 00:22:33.575 "ffdhe2048", 00:22:33.575 "ffdhe3072", 00:22:33.575 "ffdhe4096", 00:22:33.575 "ffdhe6144", 00:22:33.575 "ffdhe8192" 00:22:33.575 ] 00:22:33.575 } 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "method": "bdev_nvme_attach_controller", 00:22:33.575 "params": { 00:22:33.575 "name": "nvme0", 00:22:33.575 "trtype": "TCP", 00:22:33.575 "adrfam": "IPv4", 00:22:33.575 "traddr": "10.0.0.2", 00:22:33.575 "trsvcid": "4420", 00:22:33.575 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.575 "prchk_reftag": false, 00:22:33.575 "prchk_guard": false, 00:22:33.575 "ctrlr_loss_timeout_sec": 0, 00:22:33.575 "reconnect_delay_sec": 0, 00:22:33.575 "fast_io_fail_timeout_sec": 0, 00:22:33.575 "psk": "key0", 00:22:33.575 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:33.575 "hdgst": false, 00:22:33.575 "ddgst": false, 00:22:33.575 "multipath": "multipath" 00:22:33.575 } 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "method": "bdev_nvme_set_hotplug", 
00:22:33.575 "params": { 00:22:33.575 "period_us": 100000, 00:22:33.575 "enable": false 00:22:33.575 } 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "method": "bdev_enable_histogram", 00:22:33.575 "params": { 00:22:33.575 "name": "nvme0n1", 00:22:33.575 "enable": true 00:22:33.575 } 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "method": "bdev_wait_for_examine" 00:22:33.575 } 00:22:33.575 ] 00:22:33.575 }, 00:22:33.575 { 00:22:33.575 "subsystem": "nbd", 00:22:33.575 "config": [] 00:22:33.575 } 00:22:33.575 ] 00:22:33.575 }' 00:22:33.575 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:33.575 13:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.575 [2024-10-07 13:33:15.096810] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:22:33.575 [2024-10-07 13:33:15.096900] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831929 ] 00:22:33.575 [2024-10-07 13:33:15.151575] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.575 [2024-10-07 13:33:15.256737] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.833 [2024-10-07 13:33:15.434700] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:34.398 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:34.398 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:22:34.398 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:34.398 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@279 -- # jq -r '.[].name' 00:22:34.656 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.656 13:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:34.914 Running I/O for 1 seconds... 00:22:35.876 3393.00 IOPS, 13.25 MiB/s 00:22:35.876 Latency(us) 00:22:35.876 [2024-10-07T11:33:17.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.876 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:35.876 Verification LBA range: start 0x0 length 0x2000 00:22:35.876 nvme0n1 : 1.02 3439.21 13.43 0.00 0.00 36805.27 6359.42 31457.28 00:22:35.876 [2024-10-07T11:33:17.588Z] =================================================================================================================== 00:22:35.876 [2024-10-07T11:33:17.588Z] Total : 3439.21 13.43 0.00 0.00 36805.27 6359.42 31457.28 00:22:35.876 { 00:22:35.876 "results": [ 00:22:35.876 { 00:22:35.876 "job": "nvme0n1", 00:22:35.876 "core_mask": "0x2", 00:22:35.876 "workload": "verify", 00:22:35.876 "status": "finished", 00:22:35.876 "verify_range": { 00:22:35.876 "start": 0, 00:22:35.876 "length": 8192 00:22:35.876 }, 00:22:35.876 "queue_depth": 128, 00:22:35.876 "io_size": 4096, 00:22:35.876 "runtime": 1.023781, 00:22:35.876 "iops": 3439.2120971184268, 00:22:35.876 "mibps": 13.434422254368855, 00:22:35.876 "io_failed": 0, 00:22:35.876 "io_timeout": 0, 00:22:35.876 "avg_latency_us": 36805.267702146906, 00:22:35.876 "min_latency_us": 6359.419259259259, 00:22:35.876 "max_latency_us": 31457.28 00:22:35.876 } 00:22:35.876 ], 00:22:35.876 "core_count": 1 00:22:35.876 } 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # 
cleanup 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:35.876 nvmf_trace.0 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1831929 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1831929 ']' 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1831929 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:35.876 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1831929 00:22:36.134 13:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:36.134 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:36.134 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1831929' 00:22:36.134 killing process with pid 1831929 00:22:36.134 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1831929 00:22:36.134 Received shutdown signal, test time was about 1.000000 seconds 00:22:36.134 00:22:36.134 Latency(us) 00:22:36.134 [2024-10-07T11:33:17.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.134 [2024-10-07T11:33:17.846Z] =================================================================================================================== 00:22:36.134 [2024-10-07T11:33:17.846Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:36.134 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1831929 00:22:36.392 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:36.393 rmmod nvme_tcp 00:22:36.393 rmmod nvme_fabrics 00:22:36.393 rmmod nvme_keyring 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:36.393 13:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@515 -- # '[' -n 1831784 ']' 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # killprocess 1831784 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 1831784 ']' 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 1831784 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1831784 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1831784' 00:22:36.393 killing process with pid 1831784 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 1831784 00:22:36.393 13:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 1831784 00:22:36.651 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:36.651 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:36.651 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:36.651 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:22:36.651 
13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-save 00:22:36.651 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:36.651 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@789 -- # iptables-restore 00:22:36.651 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:36.651 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:36.651 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.651 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.651 13:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.7VwxzkbOdt /tmp/tmp.xW1iH6Yofa /tmp/tmp.p85alnbf81 00:22:39.188 00:22:39.188 real 1m25.292s 00:22:39.188 user 2m23.267s 00:22:39.188 sys 0m25.230s 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.188 ************************************ 00:22:39.188 END TEST nvmf_tls 00:22:39.188 ************************************ 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:39.188 ************************************ 00:22:39.188 START TEST nvmf_fips 00:22:39.188 ************************************ 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:39.188 * Looking for test storage... 00:22:39.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:39.188 13:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:39.188 13:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:39.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.188 --rc genhtml_branch_coverage=1 00:22:39.188 --rc genhtml_function_coverage=1 00:22:39.188 --rc genhtml_legend=1 00:22:39.188 --rc geninfo_all_blocks=1 00:22:39.188 --rc geninfo_unexecuted_blocks=1 00:22:39.188 00:22:39.188 ' 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:39.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.188 --rc genhtml_branch_coverage=1 00:22:39.188 --rc genhtml_function_coverage=1 00:22:39.188 --rc genhtml_legend=1 00:22:39.188 --rc geninfo_all_blocks=1 00:22:39.188 --rc geninfo_unexecuted_blocks=1 00:22:39.188 00:22:39.188 ' 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:39.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.188 --rc genhtml_branch_coverage=1 00:22:39.188 --rc genhtml_function_coverage=1 00:22:39.188 --rc genhtml_legend=1 00:22:39.188 --rc geninfo_all_blocks=1 00:22:39.188 --rc geninfo_unexecuted_blocks=1 00:22:39.188 00:22:39.188 ' 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:39.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:39.188 --rc genhtml_branch_coverage=1 00:22:39.188 --rc genhtml_function_coverage=1 00:22:39.188 --rc genhtml_legend=1 00:22:39.188 --rc geninfo_all_blocks=1 00:22:39.188 --rc geninfo_unexecuted_blocks=1 00:22:39.188 00:22:39.188 ' 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.188 13:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.188 13:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.188 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:39.189 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:39.189 Error setting digest 00:22:39.189 409212831D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:39.189 409212831D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:39.189 13:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:39.189 13:33:20 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:22:41.094 Found 0000:09:00.0 (0x8086 - 0x1592) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:22:41.094 Found 0000:09:00.1 (0x8086 - 0x1592) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 
00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:41.094 Found net devices under 0000:09:00.0: cvl_0_0 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 
00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:41.094 Found net devices under 0000:09:00.1: cvl_0_1 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # is_hw=yes 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:41.094 13:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:41.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:22:41.094 00:22:41.094 --- 10.0.0.2 ping statistics --- 00:22:41.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.094 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:41.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:41.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:22:41.094 00:22:41.094 --- 10.0.0.1 ping statistics --- 00:22:41.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.094 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # return 0 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:41.094 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:41.095 13:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # nvmfpid=1834189 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # waitforlisten 1834189 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1834189 ']' 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:41.095 13:33:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.354 [2024-10-07 13:33:22.846798] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:22:41.354 [2024-10-07 13:33:22.846888] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.354 [2024-10-07 13:33:22.907244] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.354 [2024-10-07 13:33:23.015410] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.354 [2024-10-07 13:33:23.015483] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.354 [2024-10-07 13:33:23.015496] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.354 [2024-10-07 13:33:23.015507] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.354 [2024-10-07 13:33:23.015516] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:41.354 [2024-10-07 13:33:23.016138] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.ASU 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.ASU 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.ASU 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.ASU 00:22:41.612 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:41.870 [2024-10-07 13:33:23.469549] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.870 [2024-10-07 13:33:23.485556] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:41.870 [2024-10-07 13:33:23.485843] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.870 malloc0 00:22:41.870 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.870 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1834284 00:22:41.870 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.870 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1834284 /var/tmp/bdevperf.sock 00:22:41.870 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 1834284 ']' 00:22:41.870 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.870 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:41.870 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.870 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:41.870 13:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:42.128 [2024-10-07 13:33:23.642938] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:22:42.128 [2024-10-07 13:33:23.643047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1834284 ] 00:22:42.128 [2024-10-07 13:33:23.708110] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.128 [2024-10-07 13:33:23.813437] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:22:43.062 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:43.062 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:22:43.062 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.ASU 00:22:43.319 13:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:43.577 [2024-10-07 13:33:25.170128] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:43.577 TLSTESTn1 00:22:43.577 13:33:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:43.835 Running I/O for 10 seconds... 
00:22:45.715 3484.00 IOPS, 13.61 MiB/s [2024-10-07T11:33:28.801Z] 3462.00 IOPS, 13.52 MiB/s [2024-10-07T11:33:29.734Z] 3481.67 IOPS, 13.60 MiB/s [2024-10-07T11:33:30.665Z] 3467.25 IOPS, 13.54 MiB/s [2024-10-07T11:33:31.596Z] 3481.80 IOPS, 13.60 MiB/s [2024-10-07T11:33:32.526Z] 3485.00 IOPS, 13.61 MiB/s [2024-10-07T11:33:33.458Z] 3486.14 IOPS, 13.62 MiB/s [2024-10-07T11:33:34.831Z] 3483.00 IOPS, 13.61 MiB/s [2024-10-07T11:33:35.764Z] 3470.78 IOPS, 13.56 MiB/s [2024-10-07T11:33:35.764Z] 3470.50 IOPS, 13.56 MiB/s 00:22:54.052 Latency(us) 00:22:54.052 [2024-10-07T11:33:35.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.052 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:54.052 Verification LBA range: start 0x0 length 0x2000 00:22:54.052 TLSTESTn1 : 10.02 3476.14 13.58 0.00 0.00 36761.50 7670.14 30486.38 00:22:54.052 [2024-10-07T11:33:35.764Z] =================================================================================================================== 00:22:54.052 [2024-10-07T11:33:35.764Z] Total : 3476.14 13.58 0.00 0.00 36761.50 7670.14 30486.38 00:22:54.052 { 00:22:54.052 "results": [ 00:22:54.052 { 00:22:54.052 "job": "TLSTESTn1", 00:22:54.052 "core_mask": "0x4", 00:22:54.052 "workload": "verify", 00:22:54.052 "status": "finished", 00:22:54.052 "verify_range": { 00:22:54.052 "start": 0, 00:22:54.052 "length": 8192 00:22:54.052 }, 00:22:54.052 "queue_depth": 128, 00:22:54.052 "io_size": 4096, 00:22:54.052 "runtime": 10.020025, 00:22:54.052 "iops": 3476.1390315892427, 00:22:54.052 "mibps": 13.57866809214548, 00:22:54.052 "io_failed": 0, 00:22:54.052 "io_timeout": 0, 00:22:54.052 "avg_latency_us": 36761.50176345678, 00:22:54.052 "min_latency_us": 7670.139259259259, 00:22:54.052 "max_latency_us": 30486.376296296297 00:22:54.052 } 00:22:54.052 ], 00:22:54.052 "core_count": 1 00:22:54.052 } 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:54.052 
13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:54.052 nvmf_trace.0 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1834284 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1834284 ']' 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1834284 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1834284 00:22:54.052 13:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1834284' 00:22:54.052 killing process with pid 1834284 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1834284 00:22:54.052 Received shutdown signal, test time was about 10.000000 seconds 00:22:54.052 00:22:54.052 Latency(us) 00:22:54.052 [2024-10-07T11:33:35.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.052 [2024-10-07T11:33:35.764Z] =================================================================================================================== 00:22:54.052 [2024-10-07T11:33:35.764Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.052 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1834284 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # nvmfcleanup 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:54.310 rmmod nvme_tcp 00:22:54.310 rmmod nvme_fabrics 00:22:54.310 rmmod nvme_keyring 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@515 -- # '[' -n 1834189 ']' 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # killprocess 1834189 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 1834189 ']' 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 1834189 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1834189 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1834189' 00:22:54.310 killing process with pid 1834189 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 1834189 00:22:54.310 13:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 1834189 00:22:54.568 13:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:22:54.568 13:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:22:54.568 13:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:22:54.568 13:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:22:54.568 13:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-save 00:22:54.568 13:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:22:54.568 13:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@789 -- # iptables-restore 00:22:54.569 13:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:54.569 13:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:54.569 13:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.569 13:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.569 13:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.ASU 00:22:57.104 00:22:57.104 real 0m17.909s 00:22:57.104 user 0m24.489s 00:22:57.104 sys 0m5.467s 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:57.104 ************************************ 00:22:57.104 END TEST nvmf_fips 00:22:57.104 ************************************ 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:57.104 ************************************ 00:22:57.104 START TEST nvmf_control_msg_list 00:22:57.104 ************************************ 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:57.104 * Looking for test storage... 00:22:57.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:57.104 13:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:57.104 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:57.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.105 --rc genhtml_branch_coverage=1 00:22:57.105 --rc genhtml_function_coverage=1 00:22:57.105 --rc genhtml_legend=1 00:22:57.105 --rc geninfo_all_blocks=1 00:22:57.105 --rc geninfo_unexecuted_blocks=1 00:22:57.105 00:22:57.105 ' 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:57.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.105 --rc genhtml_branch_coverage=1 00:22:57.105 --rc genhtml_function_coverage=1 00:22:57.105 --rc genhtml_legend=1 00:22:57.105 --rc geninfo_all_blocks=1 00:22:57.105 --rc geninfo_unexecuted_blocks=1 00:22:57.105 00:22:57.105 ' 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:57.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.105 --rc genhtml_branch_coverage=1 00:22:57.105 --rc genhtml_function_coverage=1 00:22:57.105 --rc genhtml_legend=1 00:22:57.105 --rc geninfo_all_blocks=1 00:22:57.105 --rc geninfo_unexecuted_blocks=1 00:22:57.105 00:22:57.105 ' 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # 
LCOV='lcov 00:22:57.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.105 --rc genhtml_branch_coverage=1 00:22:57.105 --rc genhtml_function_coverage=1 00:22:57.105 --rc genhtml_legend=1 00:22:57.105 --rc geninfo_all_blocks=1 00:22:57.105 --rc geninfo_unexecuted_blocks=1 00:22:57.105 00:22:57.105 ' 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 
00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.105 13:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:57.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:57.105 13:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # prepare_net_devs 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # local -g is_hw=no 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # remove_spdk_ns 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:22:57.105 13:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:59.009 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.009 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:22:59.009 13:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.009 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:59.009 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.009 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.009 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.009 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:22:59.010 Found 0000:09:00.0 (0x8086 - 0x1592) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:22:59.010 Found 0000:09:00.1 (0x8086 - 0x1592) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.010 13:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:59.010 Found net devices under 0000:09:00.0: cvl_0_0 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ up == up ]] 00:22:59.010 13:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:59.010 Found net devices under 0000:09:00.1: cvl_0_1 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # is_hw=yes 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.010 13:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.010 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.010 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:22:59.010 00:22:59.010 --- 10.0.0.2 ping statistics --- 00:22:59.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.010 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.010 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:59.010 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:22:59.010 00:22:59.010 --- 10.0.0.1 ping statistics --- 00:22:59.010 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.010 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@448 -- # return 0 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:59.010 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:22:59.011 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:59.011 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:59.011 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # nvmfpid=1837511 00:22:59.011 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:59.011 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # waitforlisten 1837511 00:22:59.011 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 1837511 ']' 00:22:59.011 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.011 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:59.011 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
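The trace above shows `nvmf/common.sh` building a two-port NVMe/TCP test topology: one port of the NIC is moved into a network namespace to act as the target (10.0.0.2) while the other port stays in the host as the initiator (10.0.0.1). A minimal sketch of that setup, for illustration only — it requires root, and the interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are taken directly from the log:

```shell
# Mirrors the nvmf_tcp_init sequence traced above (nvmf/common.sh@265-291).
ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                  # verify host -> target-namespace path
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse direction
```

Because the target runs inside the namespace, every target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array).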
00:22:59.011 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:59.011 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:59.011 [2024-10-07 13:33:40.619196] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:22:59.011 [2024-10-07 13:33:40.619271] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.011 [2024-10-07 13:33:40.683135] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.269 [2024-10-07 13:33:40.791308] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.269 [2024-10-07 13:33:40.791366] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.269 [2024-10-07 13:33:40.791388] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.269 [2024-10-07 13:33:40.791399] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.269 [2024-10-07 13:33:40.791416] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:59.269 [2024-10-07 13:33:40.792045] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:59.269 [2024-10-07 13:33:40.933555] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:59.269 Malloc0 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.269 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:59.527 [2024-10-07 13:33:40.984809] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.527 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.528 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1837588 00:22:59.528 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:59.528 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1837589 00:22:59.528 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:59.528 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1837590 00:22:59.528 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1837588 00:22:59.528 13:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:59.528 [2024-10-07 13:33:41.043237] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
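The `rpc_cmd` calls traced above configure the running `nvmf_tgt` over its JSON-RPC socket. As a standalone sketch, the same configuration with `scripts/rpc.py` against a running target — the flags and the `cnode0` NQN are taken from the log; the relative `rpc.py` path is an assumption:

```shell
# Equivalent of the control_msg_list.sh@19-23 rpc_cmd sequence above,
# issued against a live nvmf_tgt listening on /var/tmp/spdk.sock.
RPC=./scripts/rpc.py
SUBNQN=nqn.2024-07.io.spdk:cnode0
$RPC nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
$RPC nvmf_create_subsystem "$SUBNQN" -a            # -a: allow any host to connect
$RPC bdev_malloc_create -b Malloc0 32 512          # 32 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_subsystem_add_ns "$SUBNQN" Malloc0
$RPC nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
```

The deliberately small `--control-msg-num 1` is what this test exercises: the three concurrent `spdk_nvme_perf` jobs launched next contend for a single control message, which is why two of the perf runs below report ~40 ms average latency.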
00:22:59.528 [2024-10-07 13:33:41.053623] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:59.528 [2024-10-07 13:33:41.053867] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:00.460 Initializing NVMe Controllers 00:23:00.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:00.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:00.460 Initialization complete. Launching workers. 00:23:00.460 ======================================================== 00:23:00.460 Latency(us) 00:23:00.460 Device Information : IOPS MiB/s Average min max 00:23:00.460 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5431.93 21.22 183.70 154.41 40694.67 00:23:00.460 ======================================================== 00:23:00.460 Total : 5431.93 21.22 183.70 154.41 40694.67 00:23:00.460 00:23:00.717 Initializing NVMe Controllers 00:23:00.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:00.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:00.717 Initialization complete. Launching workers. 
00:23:00.717 ======================================================== 00:23:00.717 Latency(us) 00:23:00.717 Device Information : IOPS MiB/s Average min max 00:23:00.717 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 24.00 0.09 41704.33 40276.44 41957.50 00:23:00.717 ======================================================== 00:23:00.717 Total : 24.00 0.09 41704.33 40276.44 41957.50 00:23:00.717 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1837589 00:23:00.717 Initializing NVMe Controllers 00:23:00.717 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:00.717 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:00.717 Initialization complete. Launching workers. 00:23:00.717 ======================================================== 00:23:00.717 Latency(us) 00:23:00.717 Device Information : IOPS MiB/s Average min max 00:23:00.717 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40873.10 40245.93 40963.17 00:23:00.717 ======================================================== 00:23:00.717 Total : 25.00 0.10 40873.10 40245.93 40963.17 00:23:00.717 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1837590 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:00.717 13:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:00.717 rmmod nvme_tcp 00:23:00.717 rmmod nvme_fabrics 00:23:00.717 rmmod nvme_keyring 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@515 -- # '[' -n 1837511 ']' 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # killprocess 1837511 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 1837511 ']' 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 1837511 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1837511 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- 
# echo 'killing process with pid 1837511' 00:23:00.717 killing process with pid 1837511 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 1837511 00:23:00.717 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@974 -- # wait 1837511 00:23:00.976 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:00.976 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:00.976 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:00.976 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:00.976 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-save 00:23:00.976 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:00.976 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@789 -- # iptables-restore 00:23:00.976 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:00.976 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:00.976 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:00.976 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:00.976 13:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:03.513 00:23:03.513 real 0m6.346s 00:23:03.513 user 0m5.732s 
00:23:03.513 sys 0m2.581s 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:03.513 ************************************ 00:23:03.513 END TEST nvmf_control_msg_list 00:23:03.513 ************************************ 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:03.513 ************************************ 00:23:03.513 START TEST nvmf_wait_for_buf 00:23:03.513 ************************************ 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:03.513 * Looking for test storage... 
00:23:03.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # 
export 'LCOV_OPTS= 00:23:03.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.513 --rc genhtml_branch_coverage=1 00:23:03.513 --rc genhtml_function_coverage=1 00:23:03.513 --rc genhtml_legend=1 00:23:03.513 --rc geninfo_all_blocks=1 00:23:03.513 --rc geninfo_unexecuted_blocks=1 00:23:03.513 00:23:03.513 ' 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:03.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.513 --rc genhtml_branch_coverage=1 00:23:03.513 --rc genhtml_function_coverage=1 00:23:03.513 --rc genhtml_legend=1 00:23:03.513 --rc geninfo_all_blocks=1 00:23:03.513 --rc geninfo_unexecuted_blocks=1 00:23:03.513 00:23:03.513 ' 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:03.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.513 --rc genhtml_branch_coverage=1 00:23:03.513 --rc genhtml_function_coverage=1 00:23:03.513 --rc genhtml_legend=1 00:23:03.513 --rc geninfo_all_blocks=1 00:23:03.513 --rc geninfo_unexecuted_blocks=1 00:23:03.513 00:23:03.513 ' 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:03.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:03.513 --rc genhtml_branch_coverage=1 00:23:03.513 --rc genhtml_function_coverage=1 00:23:03.513 --rc genhtml_legend=1 00:23:03.513 --rc geninfo_all_blocks=1 00:23:03.513 --rc geninfo_unexecuted_blocks=1 00:23:03.513 00:23:03.513 ' 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:03.513 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:03.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # 
gather_supported_nvmf_pci_devs 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:03.514 13:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:23:04.895 Found 0000:09:00.0 (0x8086 - 0x1592) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:23:04.895 Found 0000:09:00.1 (0x8086 - 0x1592) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:04.895 Found net devices under 0000:09:00.0: cvl_0_0 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:04.895 13:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:04.895 Found net devices under 0000:09:00.1: cvl_0_1 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # is_hw=yes 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:04.895 13:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:04.895 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.187 13:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:05.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:05.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms
00:23:05.187
00:23:05.187 --- 10.0.0.2 ping statistics ---
00:23:05.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:05.187 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:05.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:05.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms
00:23:05.187
00:23:05.187 --- 10.0.0.1 ping statistics ---
00:23:05.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:05.187 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@448 -- # return 0
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # '[' '' == iso ']'
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]]
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]]
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']'
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@500 -- # modprobe nvme-tcp
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # nvmfpid=1839558
00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf --
nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # waitforlisten 1839558 00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 1839558 ']' 00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:05.187 13:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:05.187 [2024-10-07 13:33:46.791290] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:23:05.187 [2024-10-07 13:33:46.791377] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.187 [2024-10-07 13:33:46.851496] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.470 [2024-10-07 13:33:46.953856] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.470 [2024-10-07 13:33:46.953922] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:05.470 [2024-10-07 13:33:46.953946] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.470 [2024-10-07 13:33:46.953957] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.470 [2024-10-07 13:33:46.953966] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.470 [2024-10-07 13:33:46.954525] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:05.470 
13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:05.470 Malloc0 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.470 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:23:05.471 [2024-10-07 13:33:47.153367] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:05.471 [2024-10-07 13:33:47.177585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:05.471 13:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:23:05.729 [2024-10-07 13:33:47.243776] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:23:07.102 Initializing NVMe Controllers
00:23:07.102 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:23:07.102 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:23:07.102 Initialization complete. Launching workers.
00:23:07.102 ========================================================
00:23:07.102 Latency(us)
00:23:07.102 Device Information : IOPS MiB/s Average min max
00:23:07.102 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32323.22 9995.48 64853.25
00:23:07.102 ========================================================
00:23:07.102 Total : 129.00 16.12 32323.22 9995.48 64853.25
00:23:07.102
00:23:07.102 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats
00:23:07.102 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
00:23:07.102 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:07.102 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:07.102 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:07.102 13:33:48
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:23:07.102 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:23:07.102 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:07.102 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:07.103 rmmod nvme_tcp 00:23:07.103 rmmod nvme_fabrics 00:23:07.103 rmmod nvme_keyring 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@515 -- # '[' -n 1839558 ']' 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # killprocess 1839558 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 1839558 ']' 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # kill -0 1839558 
00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1839558 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1839558' 00:23:07.103 killing process with pid 1839558 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 1839558 00:23:07.103 13:33:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 1839558 00:23:07.362 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:07.362 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:07.362 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:07.362 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:07.362 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-save 00:23:07.362 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:07.362 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@789 -- # iptables-restore 00:23:07.362 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:07.362 13:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:07.362 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:07.362 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:07.362 13:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:09.905 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:09.905
00:23:09.905 real 0m6.376s
00:23:09.905 user 0m3.002s
00:23:09.905 sys 0m1.810s
00:23:09.905 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:23:09.905 13:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:23:09.905 ************************************
00:23:09.905 END TEST nvmf_wait_for_buf
00:23:09.905 ************************************
00:23:09.905 13:33:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']'
00:23:09.905 13:33:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]]
00:23:09.905 13:33:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']'
00:23:09.905 13:33:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs
00:23:09.905 13:33:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable
00:23:09.905 13:33:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:11.808 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=()
00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs
00:23:11.809
13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:23:11.809 Found 0000:09:00.0 (0x8086 - 0x1592) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.809 13:33:53 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:23:11.809 Found 0000:09:00.1 (0x8086 - 0x1592) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:11.809 Found net devices under 0000:09:00.0: cvl_0_0 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:11.809 Found net devices under 0000:09:00.1: cvl_0_1 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:11.809 ************************************ 00:23:11.809 START TEST nvmf_perf_adq 00:23:11.809 ************************************ 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:11.809 * Looking for test storage... 00:23:11.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lcov --version 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:11.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.809 --rc genhtml_branch_coverage=1 00:23:11.809 --rc genhtml_function_coverage=1 00:23:11.809 --rc genhtml_legend=1 00:23:11.809 --rc geninfo_all_blocks=1 00:23:11.809 --rc geninfo_unexecuted_blocks=1 00:23:11.809 00:23:11.809 ' 00:23:11.809 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:11.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.809 --rc genhtml_branch_coverage=1 00:23:11.809 --rc genhtml_function_coverage=1 00:23:11.809 --rc genhtml_legend=1 00:23:11.810 --rc geninfo_all_blocks=1 00:23:11.810 --rc geninfo_unexecuted_blocks=1 00:23:11.810 00:23:11.810 ' 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:11.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.810 --rc genhtml_branch_coverage=1 00:23:11.810 --rc genhtml_function_coverage=1 00:23:11.810 --rc genhtml_legend=1 00:23:11.810 --rc geninfo_all_blocks=1 00:23:11.810 --rc geninfo_unexecuted_blocks=1 00:23:11.810 00:23:11.810 ' 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:11.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.810 --rc genhtml_branch_coverage=1 00:23:11.810 --rc genhtml_function_coverage=1 00:23:11.810 --rc genhtml_legend=1 00:23:11.810 --rc geninfo_all_blocks=1 00:23:11.810 --rc geninfo_unexecuted_blocks=1 00:23:11.810 00:23:11.810 ' 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.810 13:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:11.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:11.810 13:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:13.714 13:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:23:13.714 Found 0000:09:00.0 (0x8086 - 0x1592) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:23:13.714 
Found 0000:09:00.1 (0x8086 - 0x1592) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:13.714 Found net devices under 0000:09:00.0: cvl_0_0 00:23:13.714 13:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:13.714 Found net devices under 0000:09:00.1: cvl_0_1 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:23:13.714 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
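The trace above (`nvmf/common.sh@408`–`@427`) walks each discovered PCI address and resolves it to a kernel net device by globbing `/sys/bus/pci/devices/<pci>/net/*`, then strips the path prefix to keep only the interface name (`cvl_0_0`, `cvl_0_1`). A minimal, hedged sketch of that lookup pattern — `pci_to_netdevs` and the `sysfs_root` parameter are illustrative names added here so the sketch can run against a fake tree; the real script globs `/sys` directly:

```shell
#!/usr/bin/env bash
# Sketch (not the actual nvmf/common.sh): map a PCI address to its net
# device(s) the way the trace's gather logic does.
pci_to_netdevs() {
    local sysfs_root=$1 pci=$2
    # Glob the per-device net/ directory, as in
    #   pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    local pci_net_devs=("$sysfs_root/bus/pci/devices/$pci/net/"*)
    # Keep only the interface names, mirroring the trace's
    #   pci_net_devs=("${pci_net_devs[@]##*/}")
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "${pci_net_devs[@]}"
}

# Demo against a fake sysfs tree (the real run found cvl_0_0 under 0000:09:00.0)
root=$(mktemp -d)
mkdir -p "$root/bus/pci/devices/0000:09:00.0/net/cvl_0_0"
pci_to_netdevs "$root" 0000:09:00.0
rm -rf "$root"
```

The `${array[@]##*/}` expansion applies the prefix-strip to every element at once, which is why the trace shows a single assignment rather than a loop.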
00:23:13.715 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:14.285 13:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:16.815 13:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.094 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:23:22.095 Found 0000:09:00.0 (0x8086 - 0x1592) 00:23:22.095 13:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:23:22.095 Found 0000:09:00.1 (0x8086 - 0x1592) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:22.095 Found net devices under 0000:09:00.0: cvl_0_0 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:22.095 Found net devices under 0000:09:00.1: cvl_0_1 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.095 13:34:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:22.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:22.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.725 ms 00:23:22.095 00:23:22.095 --- 10.0.0.2 ping statistics --- 00:23:22.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.095 rtt min/avg/max/mdev = 0.725/0.725/0.725/0.000 ms 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:23:22.095 00:23:22.095 --- 10.0.0.1 ping statistics --- 00:23:22.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.095 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter 
start_nvmf_tgt 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1844113 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 1844113 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1844113 ']' 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:22.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:22.095 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.096 [2024-10-07 13:34:03.177541] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:23:22.096 [2024-10-07 13:34:03.177630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:22.096 [2024-10-07 13:34:03.244484] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:22.096 [2024-10-07 13:34:03.353227] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:22.096 [2024-10-07 13:34:03.353285] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:22.096 [2024-10-07 13:34:03.353312] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:22.096 [2024-10-07 13:34:03.353322] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:22.096 [2024-10-07 13:34:03.353331] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:22.096 [2024-10-07 13:34:03.354979] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:22.096 [2024-10-07 13:34:03.355020] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.096 [2024-10-07 13:34:03.355076] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:22.096 [2024-10-07 13:34:03.355079] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:22.096 13:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.096 [2024-10-07 13:34:03.576139] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.096 Malloc1 00:23:22.096 13:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:22.096 [2024-10-07 13:34:03.626426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1844257 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:22.096 13:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:23:23.996 13:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:23:23.996 13:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.996 13:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:23.996 13:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.996 13:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:23:23.996 "tick_rate": 2700000000, 00:23:23.996 "poll_groups": [ 00:23:23.996 { 00:23:23.996 "name": "nvmf_tgt_poll_group_000", 00:23:23.996 "admin_qpairs": 1, 00:23:23.996 "io_qpairs": 1, 00:23:23.996 "current_admin_qpairs": 1, 00:23:23.996 "current_io_qpairs": 1, 00:23:23.996 "pending_bdev_io": 0, 00:23:23.996 "completed_nvme_io": 19539, 00:23:23.996 "transports": [ 00:23:23.996 { 00:23:23.996 "trtype": "TCP" 00:23:23.996 } 00:23:23.996 ] 00:23:23.996 }, 00:23:23.996 { 00:23:23.996 "name": "nvmf_tgt_poll_group_001", 00:23:23.996 "admin_qpairs": 0, 00:23:23.996 "io_qpairs": 1, 00:23:23.996 "current_admin_qpairs": 0, 00:23:23.996 "current_io_qpairs": 1, 00:23:23.996 "pending_bdev_io": 0, 00:23:23.996 "completed_nvme_io": 19625, 00:23:23.996 "transports": [ 00:23:23.996 { 00:23:23.996 "trtype": "TCP" 00:23:23.996 } 00:23:23.996 ] 00:23:23.996 }, 00:23:23.996 { 00:23:23.996 "name": "nvmf_tgt_poll_group_002", 00:23:23.996 "admin_qpairs": 0, 00:23:23.996 "io_qpairs": 1, 00:23:23.996 "current_admin_qpairs": 0, 00:23:23.996 "current_io_qpairs": 1, 00:23:23.996 "pending_bdev_io": 0, 00:23:23.996 "completed_nvme_io": 
19876, 00:23:23.996 "transports": [ 00:23:23.996 { 00:23:23.996 "trtype": "TCP" 00:23:23.996 } 00:23:23.996 ] 00:23:23.996 }, 00:23:23.996 { 00:23:23.996 "name": "nvmf_tgt_poll_group_003", 00:23:23.996 "admin_qpairs": 0, 00:23:23.996 "io_qpairs": 1, 00:23:23.996 "current_admin_qpairs": 0, 00:23:23.996 "current_io_qpairs": 1, 00:23:23.996 "pending_bdev_io": 0, 00:23:23.996 "completed_nvme_io": 19062, 00:23:23.996 "transports": [ 00:23:23.996 { 00:23:23.996 "trtype": "TCP" 00:23:23.996 } 00:23:23.996 ] 00:23:23.996 } 00:23:23.996 ] 00:23:23.996 }' 00:23:23.996 13:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:23.996 13:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:23:23.996 13:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:23:23.996 13:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:23:23.996 13:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1844257 00:23:32.108 Initializing NVMe Controllers 00:23:32.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:32.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:32.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:32.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:32.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:32.108 Initialization complete. Launching workers. 
00:23:32.108 ======================================================== 00:23:32.108 Latency(us) 00:23:32.108 Device Information : IOPS MiB/s Average min max 00:23:32.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10347.90 40.42 6186.72 2462.50 10434.25 00:23:32.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10276.70 40.14 6228.51 2402.40 10248.65 00:23:32.108 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9987.90 39.02 6409.55 2296.46 10803.65 00:23:32.109 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10229.70 39.96 6256.03 2282.78 10602.21 00:23:32.109 ======================================================== 00:23:32.109 Total : 40842.20 159.54 6269.09 2282.78 10803.65 00:23:32.109 00:23:32.109 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:32.109 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:32.109 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:32.109 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:32.109 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:32.109 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:32.109 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:32.109 rmmod nvme_tcp 00:23:32.109 rmmod nvme_fabrics 00:23:32.109 rmmod nvme_keyring 00:23:32.109 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:32.367 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:32.367 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:32.367 13:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1844113 ']' 00:23:32.367 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1844113 00:23:32.367 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1844113 ']' 00:23:32.367 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1844113 00:23:32.367 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:23:32.367 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:32.367 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1844113 00:23:32.367 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:32.367 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:32.367 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1844113' 00:23:32.367 killing process with pid 1844113 00:23:32.367 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1844113 00:23:32.367 13:34:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1844113 00:23:32.627 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:32.627 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:32.627 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:32.627 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:32.627 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:23:32.627 
13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:32.627 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:23:32.627 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:32.627 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:32.627 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.627 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.627 13:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.529 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:34.529 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:34.529 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:34.529 13:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:35.465 13:34:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:37.412 13:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:42.737 13:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:23:42.737 Found 0000:09:00.0 (0x8086 - 0x1592) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:23:42.737 
Found 0000:09:00.1 (0x8086 - 0x1592) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:42.737 Found net devices under 0000:09:00.0: cvl_0_0 00:23:42.737 13:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:42.737 Found net devices under 0000:09:00.1: cvl_0_1 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:42.737 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # is_hw=yes 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:42.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:42.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:23:42.738 00:23:42.738 --- 10.0.0.2 ping statistics --- 00:23:42.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.738 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:42.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:42.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:23:42.738 00:23:42.738 --- 10.0.0.1 ping statistics --- 00:23:42.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:42.738 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # return 0 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:42.738 net.core.busy_poll = 1 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:42.738 net.core.busy_read = 1 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # nvmfpid=1846850 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # waitforlisten 
1846850 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 1846850 ']' 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:42.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:42.738 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:42.738 [2024-10-07 13:34:24.426982] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:23:42.738 [2024-10-07 13:34:24.427087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:42.995 [2024-10-07 13:34:24.492200] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:42.995 [2024-10-07 13:34:24.603371] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:42.995 [2024-10-07 13:34:24.603454] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:42.995 [2024-10-07 13:34:24.603468] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:42.995 [2024-10-07 13:34:24.603478] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:42.995 [2024-10-07 13:34:24.603488] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:42.995 [2024-10-07 13:34:24.605144] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.995 [2024-10-07 13:34:24.605208] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:42.995 [2024-10-07 13:34:24.605276] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:42.995 [2024-10-07 13:34:24.605279] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.995 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:42.995 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:23:42.995 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:42.995 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:42.995 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:42.995 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:42.995 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:42.995 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:42.995 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:42.995 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.995 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:42.995 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.253 [2024-10-07 13:34:24.861746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.253 13:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.253 Malloc1 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:43.253 [2024-10-07 13:34:24.914915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1846993 
00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:43.253 13:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:45.782 13:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:45.782 13:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.782 13:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:45.782 13:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.782 13:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:45.782 "tick_rate": 2700000000, 00:23:45.782 "poll_groups": [ 00:23:45.782 { 00:23:45.782 "name": "nvmf_tgt_poll_group_000", 00:23:45.782 "admin_qpairs": 1, 00:23:45.782 "io_qpairs": 2, 00:23:45.782 "current_admin_qpairs": 1, 00:23:45.782 "current_io_qpairs": 2, 00:23:45.782 "pending_bdev_io": 0, 00:23:45.782 "completed_nvme_io": 25921, 00:23:45.782 "transports": [ 00:23:45.782 { 00:23:45.782 "trtype": "TCP" 00:23:45.782 } 00:23:45.782 ] 00:23:45.782 }, 00:23:45.782 { 00:23:45.782 "name": "nvmf_tgt_poll_group_001", 00:23:45.782 "admin_qpairs": 0, 00:23:45.782 "io_qpairs": 2, 00:23:45.782 "current_admin_qpairs": 0, 00:23:45.782 "current_io_qpairs": 2, 00:23:45.782 "pending_bdev_io": 0, 00:23:45.782 "completed_nvme_io": 25532, 00:23:45.782 "transports": [ 00:23:45.782 { 00:23:45.782 "trtype": "TCP" 00:23:45.782 } 00:23:45.782 ] 00:23:45.782 }, 00:23:45.782 { 00:23:45.782 "name": "nvmf_tgt_poll_group_002", 00:23:45.782 "admin_qpairs": 0, 00:23:45.782 "io_qpairs": 0, 00:23:45.782 "current_admin_qpairs": 0, 
00:23:45.782 "current_io_qpairs": 0, 00:23:45.782 "pending_bdev_io": 0, 00:23:45.782 "completed_nvme_io": 0, 00:23:45.782 "transports": [ 00:23:45.782 { 00:23:45.782 "trtype": "TCP" 00:23:45.782 } 00:23:45.782 ] 00:23:45.782 }, 00:23:45.782 { 00:23:45.782 "name": "nvmf_tgt_poll_group_003", 00:23:45.782 "admin_qpairs": 0, 00:23:45.782 "io_qpairs": 0, 00:23:45.782 "current_admin_qpairs": 0, 00:23:45.782 "current_io_qpairs": 0, 00:23:45.782 "pending_bdev_io": 0, 00:23:45.782 "completed_nvme_io": 0, 00:23:45.782 "transports": [ 00:23:45.782 { 00:23:45.782 "trtype": "TCP" 00:23:45.782 } 00:23:45.782 ] 00:23:45.782 } 00:23:45.782 ] 00:23:45.782 }' 00:23:45.782 13:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:45.782 13:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:45.782 13:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:45.782 13:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:45.782 13:34:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1846993 00:23:53.893 Initializing NVMe Controllers 00:23:53.893 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:53.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:53.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:53.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:53.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:53.893 Initialization complete. Launching workers. 
00:23:53.893 ======================================================== 00:23:53.893 Latency(us) 00:23:53.893 Device Information : IOPS MiB/s Average min max 00:23:53.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5764.50 22.52 11123.38 2079.72 53984.45 00:23:53.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5323.50 20.79 12027.91 1990.95 53633.82 00:23:53.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7567.10 29.56 8458.02 1849.09 54696.54 00:23:53.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8082.00 31.57 7922.09 1414.73 54365.74 00:23:53.893 ======================================================== 00:23:53.893 Total : 26737.09 104.44 9581.46 1414.73 54696.54 00:23:53.893 00:23:53.893 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:53.893 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@514 -- # nvmfcleanup 00:23:53.893 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:53.893 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:53.893 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:53.893 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:53.894 rmmod nvme_tcp 00:23:53.894 rmmod nvme_fabrics 00:23:53.894 rmmod nvme_keyring 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:53.894 13:34:35 
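The stats check traced above (perf_adq.sh@107-109) counts poll groups whose `current_io_qpairs` is 0 to confirm ADQ steered all I/O connections onto the expected cores, leaving the others idle. A minimal stand-alone sketch of that counting logic, using `grep` on a trimmed stand-in for the `nvmf_get_stats` RPC output instead of the script's `jq` filter (the `stats` variable and its contents are illustrative, not taken from the SPDK scripts):

```shell
# Trimmed stand-in for the per-poll-group fields of `rpc.py nvmf_get_stats`;
# two groups carried the I/O qpairs, two stayed idle.
stats='"current_io_qpairs": 2,
"current_io_qpairs": 2,
"current_io_qpairs": 0,
"current_io_qpairs": 0,'

# Count poll groups that received no I/O connections.
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 0')
echo "$count"

# Mirrors the script's `[[ $count -lt 2 ]]` guard: with ADQ steering
# working, at least two of the four poll groups should remain idle.
if [ "$count" -lt 2 ]; then
  echo "ADQ steering looks broken: only $count idle poll groups"
else
  echo "OK: $count of 4 poll groups stayed idle"
fi
```

The real test uses `jq -r '.poll_groups[] | select(.current_io_qpairs == 0)' | wc -l`; the grep variant here just keeps the sketch dependency-free.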
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@515 -- # '[' -n 1846850 ']' 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # killprocess 1846850 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 1846850 ']' 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 1846850 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1846850 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1846850' 00:23:53.894 killing process with pid 1846850 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 1846850 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 1846850 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-save 00:23:53.894 
13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@789 -- # iptables-restore 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.894 13:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.182 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:57.182 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:57.182 00:23:57.182 real 0m45.366s 00:23:57.182 user 2m40.741s 00:23:57.182 sys 0m8.753s 00:23:57.182 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:57.182 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:57.182 ************************************ 00:23:57.182 END TEST nvmf_perf_adq 00:23:57.182 ************************************ 00:23:57.182 13:34:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:57.182 13:34:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:57.182 13:34:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:57.182 13:34:38 nvmf_tcp.nvmf_target_extra -- 
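The `iptr` teardown traced above removes only SPDK's own firewall rules: `ipts` installs each rule with `-m comment --comment 'SPDK_NVMF:...'`, so cleanup can pipe `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore` and leave unrelated rules untouched. A minimal sketch of just the filtering step, run against a made-up two-rule dump so no root access or iptables binary is needed (both sample rules are illustrative):

```shell
# Stand-in for `iptables-save` output: one SPDK-tagged rule, one unrelated rule.
dump='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# Drop every rule carrying the SPDK_NVMF comment tag; what survives is
# what `iptables-restore` would reload.
kept=$(printf '%s\n' "$dump" | grep -v SPDK_NVMF)
echo "$kept"
```

Tagging rules with a fixed comment string at insert time is what makes this cleanup safe to run blindly at test exit, even after the script has lost track of which rules it added.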
common/autotest_common.sh@10 -- # set +x 00:23:57.182 ************************************ 00:23:57.182 START TEST nvmf_shutdown 00:23:57.182 ************************************ 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:57.183 * Looking for test storage... 00:23:57.183 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:57.183 13:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:57.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.183 --rc genhtml_branch_coverage=1 00:23:57.183 --rc genhtml_function_coverage=1 00:23:57.183 --rc genhtml_legend=1 00:23:57.183 --rc geninfo_all_blocks=1 00:23:57.183 --rc geninfo_unexecuted_blocks=1 00:23:57.183 00:23:57.183 ' 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:57.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.183 --rc genhtml_branch_coverage=1 00:23:57.183 --rc genhtml_function_coverage=1 00:23:57.183 --rc genhtml_legend=1 00:23:57.183 --rc geninfo_all_blocks=1 00:23:57.183 --rc geninfo_unexecuted_blocks=1 00:23:57.183 00:23:57.183 ' 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:57.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.183 --rc genhtml_branch_coverage=1 00:23:57.183 --rc genhtml_function_coverage=1 00:23:57.183 --rc genhtml_legend=1 00:23:57.183 --rc geninfo_all_blocks=1 00:23:57.183 --rc geninfo_unexecuted_blocks=1 00:23:57.183 00:23:57.183 ' 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:57.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:57.183 --rc genhtml_branch_coverage=1 00:23:57.183 --rc genhtml_function_coverage=1 00:23:57.183 --rc genhtml_legend=1 00:23:57.183 --rc geninfo_all_blocks=1 00:23:57.183 --rc geninfo_unexecuted_blocks=1 00:23:57.183 00:23:57.183 ' 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:57.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:57.183 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:57.184 ************************************ 00:23:57.184 START TEST nvmf_shutdown_tc1 00:23:57.184 ************************************ 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # prepare_net_devs 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:57.184 13:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:59.713 13:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.713 13:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:23:59.713 Found 0000:09:00.0 (0x8086 - 0x1592) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:59.713 13:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:23:59.713 Found 0000:09:00.1 (0x8086 - 0x1592) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:59.713 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:59.714 Found net devices under 0000:09:00.0: cvl_0_0 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@426 -- 
# echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:59.714 Found net devices under 0000:09:00.1: cvl_0_1 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # is_hw=yes 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:59.714 13:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:59.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:23:59.714 00:23:59.714 --- 10.0.0.2 ping statistics --- 00:23:59.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.714 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:59.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:23:59.714 00:23:59.714 --- 10.0.0.1 ping statistics --- 00:23:59.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.714 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # return 0 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # nvmfpid=1850165 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # waitforlisten 1850165 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1850165 ']' 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:59.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:59.714 13:34:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:59.714 [2024-10-07 13:34:41.043099] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:23:59.714 [2024-10-07 13:34:41.043168] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.714 [2024-10-07 13:34:41.103306] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:59.714 [2024-10-07 13:34:41.209493] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.714 [2024-10-07 13:34:41.209561] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.714 [2024-10-07 13:34:41.209584] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.714 [2024-10-07 13:34:41.209595] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.714 [2024-10-07 13:34:41.209604] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:59.714 [2024-10-07 13:34:41.211289] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.714 [2024-10-07 13:34:41.211315] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.714 [2024-10-07 13:34:41.211375] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:23:59.714 [2024-10-07 13:34:41.211379] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:59.714 [2024-10-07 13:34:41.374172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.714 13:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:59.714 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.715 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:59.972 Malloc1 00:23:59.972 [2024-10-07 13:34:41.467763] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.972 Malloc2 00:23:59.972 Malloc3 00:23:59.972 Malloc4 00:23:59.972 Malloc5 00:23:59.972 Malloc6 00:24:00.242 Malloc7 00:24:00.242 Malloc8 00:24:00.242 Malloc9 
00:24:00.242 Malloc10 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1850338 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1850338 /var/tmp/bdevperf.sock 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 1850338 ']' 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:24:00.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:00.242 { 00:24:00.242 "params": { 00:24:00.242 "name": "Nvme$subsystem", 00:24:00.242 "trtype": "$TEST_TRANSPORT", 00:24:00.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.242 "adrfam": "ipv4", 00:24:00.242 "trsvcid": "$NVMF_PORT", 00:24:00.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.242 "hdgst": ${hdgst:-false}, 00:24:00.242 "ddgst": ${ddgst:-false} 00:24:00.242 }, 00:24:00.242 "method": "bdev_nvme_attach_controller" 00:24:00.242 } 00:24:00.242 EOF 00:24:00.242 )") 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:00.242 { 00:24:00.242 "params": { 00:24:00.242 "name": "Nvme$subsystem", 00:24:00.242 "trtype": "$TEST_TRANSPORT", 00:24:00.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.242 "adrfam": "ipv4", 00:24:00.242 "trsvcid": "$NVMF_PORT", 00:24:00.242 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.242 "hdgst": ${hdgst:-false}, 00:24:00.242 "ddgst": ${ddgst:-false} 00:24:00.242 }, 00:24:00.242 "method": "bdev_nvme_attach_controller" 00:24:00.242 } 00:24:00.242 EOF 00:24:00.242 )") 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:00.242 { 00:24:00.242 "params": { 00:24:00.242 "name": "Nvme$subsystem", 00:24:00.242 "trtype": "$TEST_TRANSPORT", 00:24:00.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.242 "adrfam": "ipv4", 00:24:00.242 "trsvcid": "$NVMF_PORT", 00:24:00.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.242 "hdgst": ${hdgst:-false}, 00:24:00.242 "ddgst": ${ddgst:-false} 00:24:00.242 }, 00:24:00.242 "method": "bdev_nvme_attach_controller" 00:24:00.242 } 00:24:00.242 EOF 00:24:00.242 )") 00:24:00.242 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:00.500 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:00.500 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:00.500 { 00:24:00.501 "params": { 00:24:00.501 "name": "Nvme$subsystem", 00:24:00.501 "trtype": "$TEST_TRANSPORT", 00:24:00.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "$NVMF_PORT", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.501 "hdgst": 
${hdgst:-false}, 00:24:00.501 "ddgst": ${ddgst:-false} 00:24:00.501 }, 00:24:00.501 "method": "bdev_nvme_attach_controller" 00:24:00.501 } 00:24:00.501 EOF 00:24:00.501 )") 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:00.501 { 00:24:00.501 "params": { 00:24:00.501 "name": "Nvme$subsystem", 00:24:00.501 "trtype": "$TEST_TRANSPORT", 00:24:00.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "$NVMF_PORT", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.501 "hdgst": ${hdgst:-false}, 00:24:00.501 "ddgst": ${ddgst:-false} 00:24:00.501 }, 00:24:00.501 "method": "bdev_nvme_attach_controller" 00:24:00.501 } 00:24:00.501 EOF 00:24:00.501 )") 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:00.501 { 00:24:00.501 "params": { 00:24:00.501 "name": "Nvme$subsystem", 00:24:00.501 "trtype": "$TEST_TRANSPORT", 00:24:00.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "$NVMF_PORT", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.501 "hdgst": ${hdgst:-false}, 00:24:00.501 "ddgst": ${ddgst:-false} 00:24:00.501 }, 00:24:00.501 "method": "bdev_nvme_attach_controller" 
00:24:00.501 } 00:24:00.501 EOF 00:24:00.501 )") 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:00.501 { 00:24:00.501 "params": { 00:24:00.501 "name": "Nvme$subsystem", 00:24:00.501 "trtype": "$TEST_TRANSPORT", 00:24:00.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "$NVMF_PORT", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.501 "hdgst": ${hdgst:-false}, 00:24:00.501 "ddgst": ${ddgst:-false} 00:24:00.501 }, 00:24:00.501 "method": "bdev_nvme_attach_controller" 00:24:00.501 } 00:24:00.501 EOF 00:24:00.501 )") 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:00.501 { 00:24:00.501 "params": { 00:24:00.501 "name": "Nvme$subsystem", 00:24:00.501 "trtype": "$TEST_TRANSPORT", 00:24:00.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "$NVMF_PORT", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.501 "hdgst": ${hdgst:-false}, 00:24:00.501 "ddgst": ${ddgst:-false} 00:24:00.501 }, 00:24:00.501 "method": "bdev_nvme_attach_controller" 00:24:00.501 } 00:24:00.501 EOF 00:24:00.501 )") 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@580 -- # cat 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:00.501 { 00:24:00.501 "params": { 00:24:00.501 "name": "Nvme$subsystem", 00:24:00.501 "trtype": "$TEST_TRANSPORT", 00:24:00.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "$NVMF_PORT", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.501 "hdgst": ${hdgst:-false}, 00:24:00.501 "ddgst": ${ddgst:-false} 00:24:00.501 }, 00:24:00.501 "method": "bdev_nvme_attach_controller" 00:24:00.501 } 00:24:00.501 EOF 00:24:00.501 )") 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:00.501 { 00:24:00.501 "params": { 00:24:00.501 "name": "Nvme$subsystem", 00:24:00.501 "trtype": "$TEST_TRANSPORT", 00:24:00.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "$NVMF_PORT", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.501 "hdgst": ${hdgst:-false}, 00:24:00.501 "ddgst": ${ddgst:-false} 00:24:00.501 }, 00:24:00.501 "method": "bdev_nvme_attach_controller" 00:24:00.501 } 00:24:00.501 EOF 00:24:00.501 )") 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # jq . 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:24:00.501 13:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:24:00.501 "params": { 00:24:00.501 "name": "Nvme1", 00:24:00.501 "trtype": "tcp", 00:24:00.501 "traddr": "10.0.0.2", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "4420", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.501 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:00.501 "hdgst": false, 00:24:00.501 "ddgst": false 00:24:00.501 }, 00:24:00.501 "method": "bdev_nvme_attach_controller" 00:24:00.501 },{ 00:24:00.501 "params": { 00:24:00.501 "name": "Nvme2", 00:24:00.501 "trtype": "tcp", 00:24:00.501 "traddr": "10.0.0.2", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "4420", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:00.501 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:00.501 "hdgst": false, 00:24:00.501 "ddgst": false 00:24:00.501 }, 00:24:00.501 "method": "bdev_nvme_attach_controller" 00:24:00.501 },{ 00:24:00.501 "params": { 00:24:00.501 "name": "Nvme3", 00:24:00.501 "trtype": "tcp", 00:24:00.501 "traddr": "10.0.0.2", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "4420", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:00.501 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:00.501 "hdgst": false, 00:24:00.501 "ddgst": false 00:24:00.501 }, 00:24:00.501 "method": "bdev_nvme_attach_controller" 00:24:00.501 },{ 00:24:00.501 "params": { 00:24:00.501 "name": "Nvme4", 00:24:00.501 "trtype": "tcp", 00:24:00.501 "traddr": "10.0.0.2", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "4420", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:00.501 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:00.501 "hdgst": false, 00:24:00.501 "ddgst": false 00:24:00.501 }, 00:24:00.501 "method": "bdev_nvme_attach_controller" 00:24:00.501 },{ 
00:24:00.501 "params": { 00:24:00.501 "name": "Nvme5", 00:24:00.501 "trtype": "tcp", 00:24:00.501 "traddr": "10.0.0.2", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "4420", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:00.501 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:00.501 "hdgst": false, 00:24:00.501 "ddgst": false 00:24:00.501 }, 00:24:00.501 "method": "bdev_nvme_attach_controller" 00:24:00.501 },{ 00:24:00.501 "params": { 00:24:00.501 "name": "Nvme6", 00:24:00.501 "trtype": "tcp", 00:24:00.501 "traddr": "10.0.0.2", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "4420", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:00.501 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:00.501 "hdgst": false, 00:24:00.501 "ddgst": false 00:24:00.501 }, 00:24:00.501 "method": "bdev_nvme_attach_controller" 00:24:00.501 },{ 00:24:00.501 "params": { 00:24:00.501 "name": "Nvme7", 00:24:00.501 "trtype": "tcp", 00:24:00.501 "traddr": "10.0.0.2", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "4420", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:00.501 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:00.501 "hdgst": false, 00:24:00.501 "ddgst": false 00:24:00.501 }, 00:24:00.501 "method": "bdev_nvme_attach_controller" 00:24:00.501 },{ 00:24:00.501 "params": { 00:24:00.501 "name": "Nvme8", 00:24:00.501 "trtype": "tcp", 00:24:00.501 "traddr": "10.0.0.2", 00:24:00.501 "adrfam": "ipv4", 00:24:00.501 "trsvcid": "4420", 00:24:00.501 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:00.502 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:00.502 "hdgst": false, 00:24:00.502 "ddgst": false 00:24:00.502 }, 00:24:00.502 "method": "bdev_nvme_attach_controller" 00:24:00.502 },{ 00:24:00.502 "params": { 00:24:00.502 "name": "Nvme9", 00:24:00.502 "trtype": "tcp", 00:24:00.502 "traddr": "10.0.0.2", 00:24:00.502 "adrfam": "ipv4", 00:24:00.502 "trsvcid": "4420", 00:24:00.502 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:00.502 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:24:00.502 "hdgst": false, 00:24:00.502 "ddgst": false 00:24:00.502 }, 00:24:00.502 "method": "bdev_nvme_attach_controller" 00:24:00.502 },{ 00:24:00.502 "params": { 00:24:00.502 "name": "Nvme10", 00:24:00.502 "trtype": "tcp", 00:24:00.502 "traddr": "10.0.0.2", 00:24:00.502 "adrfam": "ipv4", 00:24:00.502 "trsvcid": "4420", 00:24:00.502 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:00.502 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:00.502 "hdgst": false, 00:24:00.502 "ddgst": false 00:24:00.502 }, 00:24:00.502 "method": "bdev_nvme_attach_controller" 00:24:00.502 }' 00:24:00.502 [2024-10-07 13:34:41.990561] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:24:00.502 [2024-10-07 13:34:41.990648] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:00.502 [2024-10-07 13:34:42.050305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.502 [2024-10-07 13:34:42.160732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.398 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:02.398 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:24:02.398 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:02.398 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:02.398 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:02.398 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:02.398 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1850338 00:24:02.398 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:24:02.398 13:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:24:03.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1850338 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:03.328 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1850165 00:24:03.328 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:03.328 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:03.328 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # config=() 00:24:03.328 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # local subsystem config 00:24:03.328 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:03.328 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:03.328 { 00:24:03.328 "params": { 00:24:03.328 "name": "Nvme$subsystem", 00:24:03.328 "trtype": "$TEST_TRANSPORT", 00:24:03.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.328 "adrfam": "ipv4", 00:24:03.328 "trsvcid": "$NVMF_PORT", 00:24:03.328 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.328 "hdgst": ${hdgst:-false}, 00:24:03.328 "ddgst": ${ddgst:-false} 00:24:03.328 }, 00:24:03.328 "method": "bdev_nvme_attach_controller" 00:24:03.328 } 00:24:03.328 EOF 00:24:03.328 )") 00:24:03.328 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:03.328 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:03.328 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:03.328 { 00:24:03.328 "params": { 00:24:03.328 "name": "Nvme$subsystem", 00:24:03.328 "trtype": "$TEST_TRANSPORT", 00:24:03.328 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.328 "adrfam": "ipv4", 00:24:03.328 "trsvcid": "$NVMF_PORT", 00:24:03.328 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.328 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.328 "hdgst": ${hdgst:-false}, 00:24:03.328 "ddgst": ${ddgst:-false} 00:24:03.328 }, 00:24:03.328 "method": "bdev_nvme_attach_controller" 00:24:03.328 } 00:24:03.328 EOF 00:24:03.328 )") 00:24:03.328 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:03.587 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:03.587 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:03.587 { 00:24:03.587 "params": { 00:24:03.587 "name": "Nvme$subsystem", 00:24:03.587 "trtype": "$TEST_TRANSPORT", 00:24:03.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.587 "adrfam": "ipv4", 00:24:03.587 "trsvcid": "$NVMF_PORT", 00:24:03.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.587 "hdgst": 
${hdgst:-false}, 00:24:03.587 "ddgst": ${ddgst:-false} 00:24:03.587 }, 00:24:03.587 "method": "bdev_nvme_attach_controller" 00:24:03.587 } 00:24:03.587 EOF 00:24:03.587 )") 00:24:03.587 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:03.587 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:03.587 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:03.587 { 00:24:03.587 "params": { 00:24:03.587 "name": "Nvme$subsystem", 00:24:03.587 "trtype": "$TEST_TRANSPORT", 00:24:03.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.587 "adrfam": "ipv4", 00:24:03.587 "trsvcid": "$NVMF_PORT", 00:24:03.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.587 "hdgst": ${hdgst:-false}, 00:24:03.587 "ddgst": ${ddgst:-false} 00:24:03.587 }, 00:24:03.587 "method": "bdev_nvme_attach_controller" 00:24:03.587 } 00:24:03.587 EOF 00:24:03.587 )") 00:24:03.587 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:03.587 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:03.587 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:03.587 { 00:24:03.587 "params": { 00:24:03.587 "name": "Nvme$subsystem", 00:24:03.587 "trtype": "$TEST_TRANSPORT", 00:24:03.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.587 "adrfam": "ipv4", 00:24:03.587 "trsvcid": "$NVMF_PORT", 00:24:03.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.587 "hdgst": ${hdgst:-false}, 00:24:03.587 "ddgst": ${ddgst:-false} 00:24:03.587 }, 00:24:03.587 "method": "bdev_nvme_attach_controller" 
00:24:03.587 } 00:24:03.587 EOF 00:24:03.587 )") 00:24:03.587 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:03.587 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:03.587 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:03.587 { 00:24:03.587 "params": { 00:24:03.587 "name": "Nvme$subsystem", 00:24:03.587 "trtype": "$TEST_TRANSPORT", 00:24:03.587 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.587 "adrfam": "ipv4", 00:24:03.587 "trsvcid": "$NVMF_PORT", 00:24:03.587 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.587 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.587 "hdgst": ${hdgst:-false}, 00:24:03.587 "ddgst": ${ddgst:-false} 00:24:03.587 }, 00:24:03.587 "method": "bdev_nvme_attach_controller" 00:24:03.587 } 00:24:03.587 EOF 00:24:03.587 )") 00:24:03.587 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:03.587 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:03.587 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:03.587 { 00:24:03.587 "params": { 00:24:03.587 "name": "Nvme$subsystem", 00:24:03.587 "trtype": "$TEST_TRANSPORT", 00:24:03.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.588 "adrfam": "ipv4", 00:24:03.588 "trsvcid": "$NVMF_PORT", 00:24:03.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.588 "hdgst": ${hdgst:-false}, 00:24:03.588 "ddgst": ${ddgst:-false} 00:24:03.588 }, 00:24:03.588 "method": "bdev_nvme_attach_controller" 00:24:03.588 } 00:24:03.588 EOF 00:24:03.588 )") 00:24:03.588 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@580 -- # cat 00:24:03.588 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:03.588 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:03.588 { 00:24:03.588 "params": { 00:24:03.588 "name": "Nvme$subsystem", 00:24:03.588 "trtype": "$TEST_TRANSPORT", 00:24:03.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.588 "adrfam": "ipv4", 00:24:03.588 "trsvcid": "$NVMF_PORT", 00:24:03.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.588 "hdgst": ${hdgst:-false}, 00:24:03.588 "ddgst": ${ddgst:-false} 00:24:03.588 }, 00:24:03.588 "method": "bdev_nvme_attach_controller" 00:24:03.588 } 00:24:03.588 EOF 00:24:03.588 )") 00:24:03.588 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:03.588 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:03.588 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:03.588 { 00:24:03.588 "params": { 00:24:03.588 "name": "Nvme$subsystem", 00:24:03.588 "trtype": "$TEST_TRANSPORT", 00:24:03.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.588 "adrfam": "ipv4", 00:24:03.588 "trsvcid": "$NVMF_PORT", 00:24:03.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.588 "hdgst": ${hdgst:-false}, 00:24:03.588 "ddgst": ${ddgst:-false} 00:24:03.588 }, 00:24:03.588 "method": "bdev_nvme_attach_controller" 00:24:03.588 } 00:24:03.588 EOF 00:24:03.588 )") 00:24:03.588 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:03.588 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:03.588 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:03.588 { 00:24:03.588 "params": { 00:24:03.588 "name": "Nvme$subsystem", 00:24:03.588 "trtype": "$TEST_TRANSPORT", 00:24:03.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:03.588 "adrfam": "ipv4", 00:24:03.588 "trsvcid": "$NVMF_PORT", 00:24:03.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:03.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:03.588 "hdgst": ${hdgst:-false}, 00:24:03.588 "ddgst": ${ddgst:-false} 00:24:03.588 }, 00:24:03.588 "method": "bdev_nvme_attach_controller" 00:24:03.588 } 00:24:03.588 EOF 00:24:03.588 )") 00:24:03.588 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@580 -- # cat 00:24:03.588 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # jq . 00:24:03.588 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@583 -- # IFS=, 00:24:03.588 13:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:24:03.588 "params": { 00:24:03.588 "name": "Nvme1", 00:24:03.588 "trtype": "tcp", 00:24:03.588 "traddr": "10.0.0.2", 00:24:03.588 "adrfam": "ipv4", 00:24:03.588 "trsvcid": "4420", 00:24:03.588 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:03.588 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:03.588 "hdgst": false, 00:24:03.588 "ddgst": false 00:24:03.588 }, 00:24:03.588 "method": "bdev_nvme_attach_controller" 00:24:03.588 },{ 00:24:03.588 "params": { 00:24:03.588 "name": "Nvme2", 00:24:03.588 "trtype": "tcp", 00:24:03.588 "traddr": "10.0.0.2", 00:24:03.588 "adrfam": "ipv4", 00:24:03.588 "trsvcid": "4420", 00:24:03.588 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:03.588 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:03.588 "hdgst": false, 00:24:03.588 "ddgst": false 00:24:03.588 }, 
00:24:03.588 "method": "bdev_nvme_attach_controller" 00:24:03.588 },{ 00:24:03.588 "params": { 00:24:03.588 "name": "Nvme3", 00:24:03.588 "trtype": "tcp", 00:24:03.588 "traddr": "10.0.0.2", 00:24:03.588 "adrfam": "ipv4", 00:24:03.588 "trsvcid": "4420", 00:24:03.588 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:03.588 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:03.588 "hdgst": false, 00:24:03.588 "ddgst": false 00:24:03.588 }, 00:24:03.588 "method": "bdev_nvme_attach_controller" 00:24:03.588 },{ 00:24:03.588 "params": { 00:24:03.588 "name": "Nvme4", 00:24:03.588 "trtype": "tcp", 00:24:03.588 "traddr": "10.0.0.2", 00:24:03.588 "adrfam": "ipv4", 00:24:03.588 "trsvcid": "4420", 00:24:03.588 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:03.588 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:03.588 "hdgst": false, 00:24:03.588 "ddgst": false 00:24:03.588 }, 00:24:03.588 "method": "bdev_nvme_attach_controller" 00:24:03.588 },{ 00:24:03.588 "params": { 00:24:03.588 "name": "Nvme5", 00:24:03.588 "trtype": "tcp", 00:24:03.588 "traddr": "10.0.0.2", 00:24:03.588 "adrfam": "ipv4", 00:24:03.588 "trsvcid": "4420", 00:24:03.588 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:03.588 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:03.588 "hdgst": false, 00:24:03.588 "ddgst": false 00:24:03.588 }, 00:24:03.588 "method": "bdev_nvme_attach_controller" 00:24:03.588 },{ 00:24:03.588 "params": { 00:24:03.588 "name": "Nvme6", 00:24:03.588 "trtype": "tcp", 00:24:03.588 "traddr": "10.0.0.2", 00:24:03.588 "adrfam": "ipv4", 00:24:03.588 "trsvcid": "4420", 00:24:03.588 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:03.588 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:03.588 "hdgst": false, 00:24:03.588 "ddgst": false 00:24:03.588 }, 00:24:03.588 "method": "bdev_nvme_attach_controller" 00:24:03.588 },{ 00:24:03.588 "params": { 00:24:03.588 "name": "Nvme7", 00:24:03.588 "trtype": "tcp", 00:24:03.588 "traddr": "10.0.0.2", 00:24:03.588 "adrfam": "ipv4", 00:24:03.588 "trsvcid": "4420", 00:24:03.588 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:03.588 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:03.588 "hdgst": false, 00:24:03.588 "ddgst": false 00:24:03.588 }, 00:24:03.588 "method": "bdev_nvme_attach_controller" 00:24:03.588 },{ 00:24:03.588 "params": { 00:24:03.588 "name": "Nvme8", 00:24:03.588 "trtype": "tcp", 00:24:03.588 "traddr": "10.0.0.2", 00:24:03.588 "adrfam": "ipv4", 00:24:03.588 "trsvcid": "4420", 00:24:03.588 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:03.588 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:03.588 "hdgst": false, 00:24:03.588 "ddgst": false 00:24:03.588 }, 00:24:03.588 "method": "bdev_nvme_attach_controller" 00:24:03.588 },{ 00:24:03.588 "params": { 00:24:03.588 "name": "Nvme9", 00:24:03.588 "trtype": "tcp", 00:24:03.588 "traddr": "10.0.0.2", 00:24:03.588 "adrfam": "ipv4", 00:24:03.588 "trsvcid": "4420", 00:24:03.588 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:03.588 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:03.588 "hdgst": false, 00:24:03.588 "ddgst": false 00:24:03.588 }, 00:24:03.588 "method": "bdev_nvme_attach_controller" 00:24:03.588 },{ 00:24:03.588 "params": { 00:24:03.588 "name": "Nvme10", 00:24:03.588 "trtype": "tcp", 00:24:03.588 "traddr": "10.0.0.2", 00:24:03.588 "adrfam": "ipv4", 00:24:03.588 "trsvcid": "4420", 00:24:03.588 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:03.588 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:03.588 "hdgst": false, 00:24:03.588 "ddgst": false 00:24:03.588 }, 00:24:03.588 "method": "bdev_nvme_attach_controller" 00:24:03.588 }' 00:24:03.588 [2024-10-07 13:34:45.081594] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:24:03.588 [2024-10-07 13:34:45.081708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1850709 ] 00:24:03.588 [2024-10-07 13:34:45.142919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.588 [2024-10-07 13:34:45.257302] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.485 Running I/O for 1 seconds... 00:24:06.417 1737.00 IOPS, 108.56 MiB/s 00:24:06.417 Latency(us) 00:24:06.417 [2024-10-07T11:34:48.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.417 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:06.417 Verification LBA range: start 0x0 length 0x400 00:24:06.417 Nvme1n1 : 1.13 226.10 14.13 0.00 0.00 280292.31 20000.62 254765.13 00:24:06.417 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:06.417 Verification LBA range: start 0x0 length 0x400 00:24:06.417 Nvme2n1 : 1.15 226.61 14.16 0.00 0.00 274112.63 1626.26 256318.58 00:24:06.417 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:06.417 Verification LBA range: start 0x0 length 0x400 00:24:06.417 Nvme3n1 : 1.11 231.65 14.48 0.00 0.00 264184.79 18835.53 257872.02 00:24:06.417 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:06.417 Verification LBA range: start 0x0 length 0x400 00:24:06.417 Nvme4n1 : 1.10 237.03 14.81 0.00 0.00 252052.41 5242.88 270299.59 00:24:06.417 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:06.417 Verification LBA range: start 0x0 length 0x400 00:24:06.417 Nvme5n1 : 1.16 219.90 13.74 0.00 0.00 269959.77 39030.33 268746.15 00:24:06.417 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:06.417 Verification LBA range: start 0x0 
length 0x400 00:24:06.417 Nvme6n1 : 1.17 218.16 13.64 0.00 0.00 267624.30 22622.06 285834.05 00:24:06.417 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:06.417 Verification LBA range: start 0x0 length 0x400 00:24:06.417 Nvme7n1 : 1.15 222.59 13.91 0.00 0.00 257040.12 20194.80 260978.92 00:24:06.417 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:06.417 Verification LBA range: start 0x0 length 0x400 00:24:06.417 Nvme8n1 : 1.18 271.80 16.99 0.00 0.00 207754.13 17767.54 256318.58 00:24:06.417 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:06.417 Verification LBA range: start 0x0 length 0x400 00:24:06.417 Nvme9n1 : 1.16 225.73 14.11 0.00 0.00 244826.06 3495.25 260978.92 00:24:06.417 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:06.417 Verification LBA range: start 0x0 length 0x400 00:24:06.417 Nvme10n1 : 1.17 218.89 13.68 0.00 0.00 248938.00 22622.06 265639.25 00:24:06.417 [2024-10-07T11:34:48.129Z] =================================================================================================================== 00:24:06.417 [2024-10-07T11:34:48.129Z] Total : 2298.46 143.65 0.00 0.00 255486.71 1626.26 285834.05 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@46 -- # nvmftestfini 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:06.675 rmmod nvme_tcp 00:24:06.675 rmmod nvme_fabrics 00:24:06.675 rmmod nvme_keyring 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@515 -- # '[' -n 1850165 ']' 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # killprocess 1850165 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 1850165 ']' 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 1850165 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1850165 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1850165' 00:24:06.675 killing process with pid 1850165 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 1850165 00:24:06.675 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 1850165 00:24:07.240 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:07.240 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:07.240 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:07.240 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:24:07.240 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-save 00:24:07.240 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:07.240 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@789 -- # iptables-restore 00:24:07.240 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:24:07.240 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:07.240 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.240 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.240 13:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.775 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:09.775 00:24:09.775 real 0m12.190s 00:24:09.775 user 0m35.625s 00:24:09.775 sys 0m3.265s 00:24:09.775 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:09.775 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:09.775 ************************************ 00:24:09.775 END TEST nvmf_shutdown_tc1 00:24:09.775 ************************************ 00:24:09.775 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:09.775 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:09.775 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:09.775 13:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:09.775 ************************************ 00:24:09.775 START TEST nvmf_shutdown_tc2 00:24:09.775 ************************************ 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:24:09.775 13:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.775 13:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:24:09.775 Found 0000:09:00.0 (0x8086 - 0x1592) 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.775 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:24:09.776 Found 0000:09:00.1 (0x8086 - 0x1592) 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x1592 == \0\x\1\0\1\7 ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:09.776 Found net devices under 0000:09:00.0: cvl_0_0 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.776 13:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:09.776 Found net devices under 0000:09:00.1: cvl_0_1 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # is_hw=yes 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:09.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:24:09.776 00:24:09.776 --- 10.0.0.2 ping statistics --- 00:24:09.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.776 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:24:09.776 00:24:09.776 --- 10.0.0.1 ping statistics --- 00:24:09.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.776 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # return 0 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:09.776 
13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # nvmfpid=1851482 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # waitforlisten 1851482 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1851482 ']' 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:09.776 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:09.776 [2024-10-07 13:34:51.247236] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:24:09.776 [2024-10-07 13:34:51.247314] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.776 [2024-10-07 13:34:51.313264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:09.776 [2024-10-07 13:34:51.418115] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.776 [2024-10-07 13:34:51.418173] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.776 [2024-10-07 13:34:51.418186] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.776 [2024-10-07 13:34:51.418197] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.776 [2024-10-07 13:34:51.418207] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:09.776 [2024-10-07 13:34:51.419605] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.776 [2024-10-07 13:34:51.419662] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.776 [2024-10-07 13:34:51.419794] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:24:09.776 [2024-10-07 13:34:51.419798] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:10.035 [2024-10-07 13:34:51.559338] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.035 13:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:10.035 13:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:10.035 Malloc1 00:24:10.035 [2024-10-07 13:34:51.633863] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.035 Malloc2 00:24:10.035 Malloc3 00:24:10.293 Malloc4 00:24:10.293 Malloc5 00:24:10.293 Malloc6 00:24:10.293 Malloc7 00:24:10.293 Malloc8 00:24:10.293 Malloc9 
00:24:10.551 Malloc10 00:24:10.551 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.551 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:10.551 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.551 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:10.551 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1851659 00:24:10.551 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1851659 /var/tmp/bdevperf.sock 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 1851659 ']' 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # config=() 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:24:10.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # local subsystem config 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:10.552 { 00:24:10.552 "params": { 00:24:10.552 "name": "Nvme$subsystem", 00:24:10.552 "trtype": "$TEST_TRANSPORT", 00:24:10.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.552 "adrfam": "ipv4", 00:24:10.552 "trsvcid": "$NVMF_PORT", 00:24:10.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.552 "hdgst": ${hdgst:-false}, 00:24:10.552 "ddgst": ${ddgst:-false} 00:24:10.552 }, 00:24:10.552 "method": "bdev_nvme_attach_controller" 00:24:10.552 } 00:24:10.552 EOF 00:24:10.552 )") 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:10.552 { 00:24:10.552 "params": { 00:24:10.552 "name": "Nvme$subsystem", 00:24:10.552 "trtype": "$TEST_TRANSPORT", 00:24:10.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.552 "adrfam": "ipv4", 00:24:10.552 "trsvcid": "$NVMF_PORT", 00:24:10.552 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.552 "hdgst": ${hdgst:-false}, 00:24:10.552 "ddgst": ${ddgst:-false} 00:24:10.552 }, 00:24:10.552 "method": "bdev_nvme_attach_controller" 00:24:10.552 } 00:24:10.552 EOF 00:24:10.552 )") 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:10.552 { 00:24:10.552 "params": { 00:24:10.552 "name": "Nvme$subsystem", 00:24:10.552 "trtype": "$TEST_TRANSPORT", 00:24:10.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.552 "adrfam": "ipv4", 00:24:10.552 "trsvcid": "$NVMF_PORT", 00:24:10.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.552 "hdgst": ${hdgst:-false}, 00:24:10.552 "ddgst": ${ddgst:-false} 00:24:10.552 }, 00:24:10.552 "method": "bdev_nvme_attach_controller" 00:24:10.552 } 00:24:10.552 EOF 00:24:10.552 )") 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:10.552 { 00:24:10.552 "params": { 00:24:10.552 "name": "Nvme$subsystem", 00:24:10.552 "trtype": "$TEST_TRANSPORT", 00:24:10.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.552 "adrfam": "ipv4", 00:24:10.552 "trsvcid": "$NVMF_PORT", 00:24:10.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.552 "hdgst": 
${hdgst:-false}, 00:24:10.552 "ddgst": ${ddgst:-false} 00:24:10.552 }, 00:24:10.552 "method": "bdev_nvme_attach_controller" 00:24:10.552 } 00:24:10.552 EOF 00:24:10.552 )") 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:10.552 { 00:24:10.552 "params": { 00:24:10.552 "name": "Nvme$subsystem", 00:24:10.552 "trtype": "$TEST_TRANSPORT", 00:24:10.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.552 "adrfam": "ipv4", 00:24:10.552 "trsvcid": "$NVMF_PORT", 00:24:10.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.552 "hdgst": ${hdgst:-false}, 00:24:10.552 "ddgst": ${ddgst:-false} 00:24:10.552 }, 00:24:10.552 "method": "bdev_nvme_attach_controller" 00:24:10.552 } 00:24:10.552 EOF 00:24:10.552 )") 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:10.552 { 00:24:10.552 "params": { 00:24:10.552 "name": "Nvme$subsystem", 00:24:10.552 "trtype": "$TEST_TRANSPORT", 00:24:10.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.552 "adrfam": "ipv4", 00:24:10.552 "trsvcid": "$NVMF_PORT", 00:24:10.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.552 "hdgst": ${hdgst:-false}, 00:24:10.552 "ddgst": ${ddgst:-false} 00:24:10.552 }, 00:24:10.552 "method": "bdev_nvme_attach_controller" 
00:24:10.552 } 00:24:10.552 EOF 00:24:10.552 )") 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:10.552 { 00:24:10.552 "params": { 00:24:10.552 "name": "Nvme$subsystem", 00:24:10.552 "trtype": "$TEST_TRANSPORT", 00:24:10.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.552 "adrfam": "ipv4", 00:24:10.552 "trsvcid": "$NVMF_PORT", 00:24:10.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.552 "hdgst": ${hdgst:-false}, 00:24:10.552 "ddgst": ${ddgst:-false} 00:24:10.552 }, 00:24:10.552 "method": "bdev_nvme_attach_controller" 00:24:10.552 } 00:24:10.552 EOF 00:24:10.552 )") 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:10.552 { 00:24:10.552 "params": { 00:24:10.552 "name": "Nvme$subsystem", 00:24:10.552 "trtype": "$TEST_TRANSPORT", 00:24:10.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.552 "adrfam": "ipv4", 00:24:10.552 "trsvcid": "$NVMF_PORT", 00:24:10.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.552 "hdgst": ${hdgst:-false}, 00:24:10.552 "ddgst": ${ddgst:-false} 00:24:10.552 }, 00:24:10.552 "method": "bdev_nvme_attach_controller" 00:24:10.552 } 00:24:10.552 EOF 00:24:10.552 )") 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@580 -- # cat 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:10.552 { 00:24:10.552 "params": { 00:24:10.552 "name": "Nvme$subsystem", 00:24:10.552 "trtype": "$TEST_TRANSPORT", 00:24:10.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.552 "adrfam": "ipv4", 00:24:10.552 "trsvcid": "$NVMF_PORT", 00:24:10.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.552 "hdgst": ${hdgst:-false}, 00:24:10.552 "ddgst": ${ddgst:-false} 00:24:10.552 }, 00:24:10.552 "method": "bdev_nvme_attach_controller" 00:24:10.552 } 00:24:10.552 EOF 00:24:10.552 )") 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:10.552 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:10.552 { 00:24:10.552 "params": { 00:24:10.552 "name": "Nvme$subsystem", 00:24:10.552 "trtype": "$TEST_TRANSPORT", 00:24:10.552 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.552 "adrfam": "ipv4", 00:24:10.552 "trsvcid": "$NVMF_PORT", 00:24:10.552 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.552 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.552 "hdgst": ${hdgst:-false}, 00:24:10.552 "ddgst": ${ddgst:-false} 00:24:10.552 }, 00:24:10.553 "method": "bdev_nvme_attach_controller" 00:24:10.553 } 00:24:10.553 EOF 00:24:10.553 )") 00:24:10.553 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@580 -- # cat 00:24:10.553 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@582 -- # jq . 00:24:10.553 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@583 -- # IFS=, 00:24:10.553 13:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:24:10.553 "params": { 00:24:10.553 "name": "Nvme1", 00:24:10.553 "trtype": "tcp", 00:24:10.553 "traddr": "10.0.0.2", 00:24:10.553 "adrfam": "ipv4", 00:24:10.553 "trsvcid": "4420", 00:24:10.553 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.553 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:10.553 "hdgst": false, 00:24:10.553 "ddgst": false 00:24:10.553 }, 00:24:10.553 "method": "bdev_nvme_attach_controller" 00:24:10.553 },{ 00:24:10.553 "params": { 00:24:10.553 "name": "Nvme2", 00:24:10.553 "trtype": "tcp", 00:24:10.553 "traddr": "10.0.0.2", 00:24:10.553 "adrfam": "ipv4", 00:24:10.553 "trsvcid": "4420", 00:24:10.553 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:10.553 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:10.553 "hdgst": false, 00:24:10.553 "ddgst": false 00:24:10.553 }, 00:24:10.553 "method": "bdev_nvme_attach_controller" 00:24:10.553 },{ 00:24:10.553 "params": { 00:24:10.553 "name": "Nvme3", 00:24:10.553 "trtype": "tcp", 00:24:10.553 "traddr": "10.0.0.2", 00:24:10.553 "adrfam": "ipv4", 00:24:10.553 "trsvcid": "4420", 00:24:10.553 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:10.553 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:10.553 "hdgst": false, 00:24:10.553 "ddgst": false 00:24:10.553 }, 00:24:10.553 "method": "bdev_nvme_attach_controller" 00:24:10.553 },{ 00:24:10.553 "params": { 00:24:10.553 "name": "Nvme4", 00:24:10.553 "trtype": "tcp", 00:24:10.553 "traddr": "10.0.0.2", 00:24:10.553 "adrfam": "ipv4", 00:24:10.553 "trsvcid": "4420", 00:24:10.553 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:10.553 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:10.553 "hdgst": false, 00:24:10.553 "ddgst": false 00:24:10.553 }, 00:24:10.553 "method": "bdev_nvme_attach_controller" 00:24:10.553 },{ 
00:24:10.553 "params": { 00:24:10.553 "name": "Nvme5", 00:24:10.553 "trtype": "tcp", 00:24:10.553 "traddr": "10.0.0.2", 00:24:10.553 "adrfam": "ipv4", 00:24:10.553 "trsvcid": "4420", 00:24:10.553 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:10.553 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:10.553 "hdgst": false, 00:24:10.553 "ddgst": false 00:24:10.553 }, 00:24:10.553 "method": "bdev_nvme_attach_controller" 00:24:10.553 },{ 00:24:10.553 "params": { 00:24:10.553 "name": "Nvme6", 00:24:10.553 "trtype": "tcp", 00:24:10.553 "traddr": "10.0.0.2", 00:24:10.553 "adrfam": "ipv4", 00:24:10.553 "trsvcid": "4420", 00:24:10.553 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:10.553 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:10.553 "hdgst": false, 00:24:10.553 "ddgst": false 00:24:10.553 }, 00:24:10.553 "method": "bdev_nvme_attach_controller" 00:24:10.553 },{ 00:24:10.553 "params": { 00:24:10.553 "name": "Nvme7", 00:24:10.553 "trtype": "tcp", 00:24:10.553 "traddr": "10.0.0.2", 00:24:10.553 "adrfam": "ipv4", 00:24:10.553 "trsvcid": "4420", 00:24:10.553 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:10.553 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:10.553 "hdgst": false, 00:24:10.553 "ddgst": false 00:24:10.553 }, 00:24:10.553 "method": "bdev_nvme_attach_controller" 00:24:10.553 },{ 00:24:10.553 "params": { 00:24:10.553 "name": "Nvme8", 00:24:10.553 "trtype": "tcp", 00:24:10.553 "traddr": "10.0.0.2", 00:24:10.553 "adrfam": "ipv4", 00:24:10.553 "trsvcid": "4420", 00:24:10.553 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:10.553 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:10.553 "hdgst": false, 00:24:10.553 "ddgst": false 00:24:10.553 }, 00:24:10.553 "method": "bdev_nvme_attach_controller" 00:24:10.553 },{ 00:24:10.553 "params": { 00:24:10.553 "name": "Nvme9", 00:24:10.553 "trtype": "tcp", 00:24:10.553 "traddr": "10.0.0.2", 00:24:10.553 "adrfam": "ipv4", 00:24:10.553 "trsvcid": "4420", 00:24:10.553 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:10.553 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:24:10.553 "hdgst": false, 00:24:10.553 "ddgst": false 00:24:10.553 }, 00:24:10.553 "method": "bdev_nvme_attach_controller" 00:24:10.553 },{ 00:24:10.553 "params": { 00:24:10.553 "name": "Nvme10", 00:24:10.553 "trtype": "tcp", 00:24:10.553 "traddr": "10.0.0.2", 00:24:10.553 "adrfam": "ipv4", 00:24:10.553 "trsvcid": "4420", 00:24:10.553 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:10.553 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:10.553 "hdgst": false, 00:24:10.553 "ddgst": false 00:24:10.553 }, 00:24:10.553 "method": "bdev_nvme_attach_controller" 00:24:10.553 }' 00:24:10.553 [2024-10-07 13:34:52.139050] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:24:10.553 [2024-10-07 13:34:52.139122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851659 ] 00:24:10.553 [2024-10-07 13:34:52.198221] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.811 [2024-10-07 13:34:52.308996] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.184 Running I/O for 10 seconds... 
00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:24:12.750 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1851659 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1851659 ']' 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1851659 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1851659 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1851659' 00:24:13.009 killing process with pid 1851659 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1851659 00:24:13.009 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1851659 00:24:13.267 
Received shutdown signal, test time was about 0.891495 seconds 00:24:13.267 00:24:13.267 Latency(us) 00:24:13.267 [2024-10-07T11:34:54.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:13.267 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.267 Verification LBA range: start 0x0 length 0x400 00:24:13.267 Nvme1n1 : 0.84 229.35 14.33 0.00 0.00 275113.66 21456.97 251658.24 00:24:13.267 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.267 Verification LBA range: start 0x0 length 0x400 00:24:13.267 Nvme2n1 : 0.81 238.22 14.89 0.00 0.00 258760.00 30292.20 242337.56 00:24:13.267 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.267 Verification LBA range: start 0x0 length 0x400 00:24:13.267 Nvme3n1 : 0.81 236.68 14.79 0.00 0.00 253950.99 18252.99 253211.69 00:24:13.267 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.267 Verification LBA range: start 0x0 length 0x400 00:24:13.267 Nvme4n1 : 0.81 250.58 15.66 0.00 0.00 230137.14 12379.02 251658.24 00:24:13.267 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.267 Verification LBA range: start 0x0 length 0x400 00:24:13.267 Nvme5n1 : 0.85 226.58 14.16 0.00 0.00 254425.82 23204.60 253211.69 00:24:13.268 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.268 Verification LBA range: start 0x0 length 0x400 00:24:13.268 Nvme6n1 : 0.83 231.98 14.50 0.00 0.00 241784.35 31457.28 240784.12 00:24:13.268 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.268 Verification LBA range: start 0x0 length 0x400 00:24:13.268 Nvme7n1 : 0.82 233.29 14.58 0.00 0.00 234065.67 21165.70 251658.24 00:24:13.268 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.268 Verification LBA range: start 0x0 length 0x400 00:24:13.268 Nvme8n1 : 0.84 229.66 14.35 0.00 0.00 
231986.32 20291.89 251658.24 00:24:13.268 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.268 Verification LBA range: start 0x0 length 0x400 00:24:13.268 Nvme9n1 : 0.89 215.58 13.47 0.00 0.00 232358.24 23010.42 288940.94 00:24:13.268 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:13.268 Verification LBA range: start 0x0 length 0x400 00:24:13.268 Nvme10n1 : 0.84 228.02 14.25 0.00 0.00 222782.20 20291.89 259425.47 00:24:13.268 [2024-10-07T11:34:54.980Z] =================================================================================================================== 00:24:13.268 [2024-10-07T11:34:54.980Z] Total : 2319.94 145.00 0.00 0.00 243453.21 12379.02 288940.94 00:24:13.525 13:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:24:14.457 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1851482 00:24:14.457 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:24:14.457 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:14.457 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:14.457 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:14.457 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:14.457 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:14.457 13:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 
-- # sync 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:14.457 rmmod nvme_tcp 00:24:14.457 rmmod nvme_fabrics 00:24:14.457 rmmod nvme_keyring 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@515 -- # '[' -n 1851482 ']' 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # killprocess 1851482 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 1851482 ']' 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 1851482 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1851482 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1851482' 00:24:14.457 killing process with pid 1851482 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 1851482 00:24:14.457 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 1851482 00:24:15.023 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:15.023 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:15.023 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:15.023 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:24:15.023 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-save 00:24:15.023 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:15.023 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@789 -- # iptables-restore 00:24:15.023 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:15.023 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:15.023 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.023 13:34:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:15.023 13:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.943 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:16.943 00:24:16.943 real 0m7.638s 00:24:16.943 user 0m23.060s 00:24:16.943 sys 0m1.435s 00:24:16.943 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:16.943 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:16.943 ************************************ 00:24:16.943 END TEST nvmf_shutdown_tc2 00:24:16.943 ************************************ 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:17.202 ************************************ 00:24:17.202 START TEST nvmf_shutdown_tc3 00:24:17.202 ************************************ 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.202 
13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:17.202 13:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:24:17.202 Found 0000:09:00.0 (0x8086 - 0x1592) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:24:17.202 Found 0000:09:00.1 (0x8086 - 0x1592) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:17.202 13:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:17.202 Found net devices under 0000:09:00.0: cvl_0_0 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.202 13:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:17.202 Found net devices under 0000:09:00.1: cvl_0_1 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # is_hw=yes 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.202 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:17.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:24:17.203 00:24:17.203 --- 10.0.0.2 ping statistics --- 00:24:17.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.203 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:17.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:17.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:24:17.203 00:24:17.203 --- 10.0.0.1 ping statistics --- 00:24:17.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.203 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # return 0 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:17.203 
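The `nvmf_tcp_init` phase traced above splits the two ice ports across network namespaces: the target NIC (`cvl_0_0`) is moved into a fresh netns (`cvl_0_0_ns_spdk`) and given 10.0.0.2/24, while the initiator NIC (`cvl_0_1`) stays in the root namespace with 10.0.0.1/24, and the two sides ping each other to confirm the TCP path. The sketch below reproduces that command sequence in dry-run form (it echoes each command rather than executing it, since the real steps require root); it is an illustration of the pattern, not the SPDK `nvmf/common.sh` implementation — interface names and addresses are taken from the log.

```shell
#!/bin/sh
# Dry-run sketch of the netns split performed by nvmf_tcp_init in the log.
# DRY_RUN=echo prints each command; set DRY_RUN="" (as root) to execute.
DRY_RUN="${DRY_RUN:-echo}"
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # target side, ends up inside $NS with 10.0.0.2/24
INI_IF=cvl_0_1   # initiator side, stays in the root namespace with 10.0.0.1/24

$DRY_RUN ip netns add "$NS"
$DRY_RUN ip link set "$TGT_IF" netns "$NS"
$DRY_RUN ip addr add 10.0.0.1/24 dev "$INI_IF"
$DRY_RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
$DRY_RUN ip link set "$INI_IF" up
$DRY_RUN ip netns exec "$NS" ip link set "$TGT_IF" up
$DRY_RUN ip netns exec "$NS" ip link set lo up
# Bidirectional connectivity check, as in the log output above:
$DRY_RUN ping -c 1 10.0.0.2
$DRY_RUN ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target application is later launched via `ip netns exec cvl_0_0_ns_spdk …` (visible in the `nvmf_tgt` invocation below), its TCP listener on 10.0.0.2:4420 is isolated from the host stack, and the initiator reaches it only through the `cvl_0_1` interface that the iptables ACCEPT rule opens.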
13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # nvmfpid=1852529 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # waitforlisten 1852529 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1852529 ']' 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.203 13:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:17.473 [2024-10-07 13:34:58.946931] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:24:17.473 [2024-10-07 13:34:58.947047] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.473 [2024-10-07 13:34:59.010589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:17.473 [2024-10-07 13:34:59.119873] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.473 [2024-10-07 13:34:59.119938] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.473 [2024-10-07 13:34:59.119951] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.473 [2024-10-07 13:34:59.119962] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.473 [2024-10-07 13:34:59.119971] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:17.473 [2024-10-07 13:34:59.121514] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.473 [2024-10-07 13:34:59.121578] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.473 [2024-10-07 13:34:59.121642] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:24:17.473 [2024-10-07 13:34:59.121644] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:17.731 [2024-10-07 13:34:59.286109] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.731 13:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:17.731 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.732 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:17.732 Malloc1 00:24:17.732 [2024-10-07 13:34:59.375185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:17.732 Malloc2 00:24:17.989 Malloc3 00:24:17.989 Malloc4 00:24:17.989 Malloc5 00:24:17.989 Malloc6 00:24:17.989 Malloc7 00:24:18.248 Malloc8 00:24:18.248 Malloc9 
00:24:18.248 Malloc10 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1852704 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1852704 /var/tmp/bdevperf.sock 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 1852704 ']' 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # config=() 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:24:18.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # local subsystem config 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:18.248 { 00:24:18.248 "params": { 00:24:18.248 "name": "Nvme$subsystem", 00:24:18.248 "trtype": "$TEST_TRANSPORT", 00:24:18.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.248 "adrfam": "ipv4", 00:24:18.248 "trsvcid": "$NVMF_PORT", 00:24:18.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.248 "hdgst": ${hdgst:-false}, 00:24:18.248 "ddgst": ${ddgst:-false} 00:24:18.248 }, 00:24:18.248 "method": "bdev_nvme_attach_controller" 00:24:18.248 } 00:24:18.248 EOF 00:24:18.248 )") 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:18.248 { 00:24:18.248 "params": { 00:24:18.248 "name": "Nvme$subsystem", 00:24:18.248 "trtype": "$TEST_TRANSPORT", 00:24:18.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.248 "adrfam": "ipv4", 00:24:18.248 "trsvcid": "$NVMF_PORT", 00:24:18.248 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.248 "hdgst": ${hdgst:-false}, 00:24:18.248 "ddgst": ${ddgst:-false} 00:24:18.248 }, 00:24:18.248 "method": "bdev_nvme_attach_controller" 00:24:18.248 } 00:24:18.248 EOF 00:24:18.248 )") 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:18.248 { 00:24:18.248 "params": { 00:24:18.248 "name": "Nvme$subsystem", 00:24:18.248 "trtype": "$TEST_TRANSPORT", 00:24:18.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.248 "adrfam": "ipv4", 00:24:18.248 "trsvcid": "$NVMF_PORT", 00:24:18.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.248 "hdgst": ${hdgst:-false}, 00:24:18.248 "ddgst": ${ddgst:-false} 00:24:18.248 }, 00:24:18.248 "method": "bdev_nvme_attach_controller" 00:24:18.248 } 00:24:18.248 EOF 00:24:18.248 )") 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:18.248 { 00:24:18.248 "params": { 00:24:18.248 "name": "Nvme$subsystem", 00:24:18.248 "trtype": "$TEST_TRANSPORT", 00:24:18.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.248 "adrfam": "ipv4", 00:24:18.248 "trsvcid": "$NVMF_PORT", 00:24:18.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.248 "hdgst": 
${hdgst:-false}, 00:24:18.248 "ddgst": ${ddgst:-false} 00:24:18.248 }, 00:24:18.248 "method": "bdev_nvme_attach_controller" 00:24:18.248 } 00:24:18.248 EOF 00:24:18.248 )") 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:18.248 { 00:24:18.248 "params": { 00:24:18.248 "name": "Nvme$subsystem", 00:24:18.248 "trtype": "$TEST_TRANSPORT", 00:24:18.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.248 "adrfam": "ipv4", 00:24:18.248 "trsvcid": "$NVMF_PORT", 00:24:18.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.248 "hdgst": ${hdgst:-false}, 00:24:18.248 "ddgst": ${ddgst:-false} 00:24:18.248 }, 00:24:18.248 "method": "bdev_nvme_attach_controller" 00:24:18.248 } 00:24:18.248 EOF 00:24:18.248 )") 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:18.248 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:18.248 { 00:24:18.248 "params": { 00:24:18.248 "name": "Nvme$subsystem", 00:24:18.248 "trtype": "$TEST_TRANSPORT", 00:24:18.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.248 "adrfam": "ipv4", 00:24:18.248 "trsvcid": "$NVMF_PORT", 00:24:18.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.249 "hdgst": ${hdgst:-false}, 00:24:18.249 "ddgst": ${ddgst:-false} 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 
00:24:18.249 } 00:24:18.249 EOF 00:24:18.249 )") 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:18.249 { 00:24:18.249 "params": { 00:24:18.249 "name": "Nvme$subsystem", 00:24:18.249 "trtype": "$TEST_TRANSPORT", 00:24:18.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.249 "adrfam": "ipv4", 00:24:18.249 "trsvcid": "$NVMF_PORT", 00:24:18.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.249 "hdgst": ${hdgst:-false}, 00:24:18.249 "ddgst": ${ddgst:-false} 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 00:24:18.249 } 00:24:18.249 EOF 00:24:18.249 )") 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:18.249 { 00:24:18.249 "params": { 00:24:18.249 "name": "Nvme$subsystem", 00:24:18.249 "trtype": "$TEST_TRANSPORT", 00:24:18.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.249 "adrfam": "ipv4", 00:24:18.249 "trsvcid": "$NVMF_PORT", 00:24:18.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.249 "hdgst": ${hdgst:-false}, 00:24:18.249 "ddgst": ${ddgst:-false} 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 00:24:18.249 } 00:24:18.249 EOF 00:24:18.249 )") 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@580 -- # cat 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:18.249 { 00:24:18.249 "params": { 00:24:18.249 "name": "Nvme$subsystem", 00:24:18.249 "trtype": "$TEST_TRANSPORT", 00:24:18.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.249 "adrfam": "ipv4", 00:24:18.249 "trsvcid": "$NVMF_PORT", 00:24:18.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.249 "hdgst": ${hdgst:-false}, 00:24:18.249 "ddgst": ${ddgst:-false} 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 00:24:18.249 } 00:24:18.249 EOF 00:24:18.249 )") 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # for subsystem in "${@:-1}" 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # config+=("$(cat <<-EOF 00:24:18.249 { 00:24:18.249 "params": { 00:24:18.249 "name": "Nvme$subsystem", 00:24:18.249 "trtype": "$TEST_TRANSPORT", 00:24:18.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.249 "adrfam": "ipv4", 00:24:18.249 "trsvcid": "$NVMF_PORT", 00:24:18.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.249 "hdgst": ${hdgst:-false}, 00:24:18.249 "ddgst": ${ddgst:-false} 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 00:24:18.249 } 00:24:18.249 EOF 00:24:18.249 )") 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@580 -- # cat 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@582 -- # jq . 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@583 -- # IFS=, 00:24:18.249 13:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # printf '%s\n' '{ 00:24:18.249 "params": { 00:24:18.249 "name": "Nvme1", 00:24:18.249 "trtype": "tcp", 00:24:18.249 "traddr": "10.0.0.2", 00:24:18.249 "adrfam": "ipv4", 00:24:18.249 "trsvcid": "4420", 00:24:18.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.249 "hdgst": false, 00:24:18.249 "ddgst": false 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 00:24:18.249 },{ 00:24:18.249 "params": { 00:24:18.249 "name": "Nvme2", 00:24:18.249 "trtype": "tcp", 00:24:18.249 "traddr": "10.0.0.2", 00:24:18.249 "adrfam": "ipv4", 00:24:18.249 "trsvcid": "4420", 00:24:18.249 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:18.249 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:18.249 "hdgst": false, 00:24:18.249 "ddgst": false 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 00:24:18.249 },{ 00:24:18.249 "params": { 00:24:18.249 "name": "Nvme3", 00:24:18.249 "trtype": "tcp", 00:24:18.249 "traddr": "10.0.0.2", 00:24:18.249 "adrfam": "ipv4", 00:24:18.249 "trsvcid": "4420", 00:24:18.249 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:18.249 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:18.249 "hdgst": false, 00:24:18.249 "ddgst": false 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 00:24:18.249 },{ 00:24:18.249 "params": { 00:24:18.249 "name": "Nvme4", 00:24:18.249 "trtype": "tcp", 00:24:18.249 "traddr": "10.0.0.2", 00:24:18.249 "adrfam": "ipv4", 00:24:18.249 "trsvcid": "4420", 00:24:18.249 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:18.249 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:18.249 "hdgst": false, 00:24:18.249 "ddgst": false 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 00:24:18.249 },{ 
00:24:18.249 "params": { 00:24:18.249 "name": "Nvme5", 00:24:18.249 "trtype": "tcp", 00:24:18.249 "traddr": "10.0.0.2", 00:24:18.249 "adrfam": "ipv4", 00:24:18.249 "trsvcid": "4420", 00:24:18.249 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:18.249 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:18.249 "hdgst": false, 00:24:18.249 "ddgst": false 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 00:24:18.249 },{ 00:24:18.249 "params": { 00:24:18.249 "name": "Nvme6", 00:24:18.249 "trtype": "tcp", 00:24:18.249 "traddr": "10.0.0.2", 00:24:18.249 "adrfam": "ipv4", 00:24:18.249 "trsvcid": "4420", 00:24:18.249 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:18.249 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:18.249 "hdgst": false, 00:24:18.249 "ddgst": false 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 00:24:18.249 },{ 00:24:18.249 "params": { 00:24:18.249 "name": "Nvme7", 00:24:18.249 "trtype": "tcp", 00:24:18.249 "traddr": "10.0.0.2", 00:24:18.249 "adrfam": "ipv4", 00:24:18.249 "trsvcid": "4420", 00:24:18.249 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:18.249 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:18.249 "hdgst": false, 00:24:18.249 "ddgst": false 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 00:24:18.249 },{ 00:24:18.249 "params": { 00:24:18.249 "name": "Nvme8", 00:24:18.249 "trtype": "tcp", 00:24:18.249 "traddr": "10.0.0.2", 00:24:18.249 "adrfam": "ipv4", 00:24:18.249 "trsvcid": "4420", 00:24:18.249 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:18.249 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:18.249 "hdgst": false, 00:24:18.249 "ddgst": false 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 00:24:18.249 },{ 00:24:18.249 "params": { 00:24:18.249 "name": "Nvme9", 00:24:18.249 "trtype": "tcp", 00:24:18.249 "traddr": "10.0.0.2", 00:24:18.249 "adrfam": "ipv4", 00:24:18.249 "trsvcid": "4420", 00:24:18.249 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:18.249 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:24:18.249 "hdgst": false, 00:24:18.249 "ddgst": false 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 00:24:18.249 },{ 00:24:18.249 "params": { 00:24:18.249 "name": "Nvme10", 00:24:18.249 "trtype": "tcp", 00:24:18.249 "traddr": "10.0.0.2", 00:24:18.249 "adrfam": "ipv4", 00:24:18.249 "trsvcid": "4420", 00:24:18.249 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:18.249 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:18.249 "hdgst": false, 00:24:18.249 "ddgst": false 00:24:18.249 }, 00:24:18.249 "method": "bdev_nvme_attach_controller" 00:24:18.249 }' 00:24:18.249 [2024-10-07 13:34:59.894564] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:24:18.249 [2024-10-07 13:34:59.894643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1852704 ] 00:24:18.249 [2024-10-07 13:34:59.955332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.523 [2024-10-07 13:35:00.072561] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.427 Running I/O for 10 seconds... 
00:24:20.427 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:20.427 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:24:20.427 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:20.427 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.427 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:20.427 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.427 13:35:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:20.427 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:20.427 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:20.427 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:20.427 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:24:20.427 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:24:20.427 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:20.427 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:20.427 13:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:20.427 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:20.427 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.427 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:20.427 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.427 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:20.427 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:20.427 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:20.687 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:20.687 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:20.687 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:20.687 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.687 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:20.687 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:20.687 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:24:20.687 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:24:20.687 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:24:20.687 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:24:20.945 13:35:02 
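The `waitforio` calls traced above implement a bounded poll: up to 10 attempts (`i` starts at 10 and is decremented), each attempt reading `num_read_ops` for `Nvme1n1` via `rpc_cmd ... bdev_get_iostat | jq`, sleeping 0.25 s until the count reaches 100 (here 3, then 67, then 131). A hedged sketch of that retry loop; the stub counter standing in for the RPC pipeline, and its replayed values, are illustrative rather than part of the harness, and argument handling is omitted:

```shell
#!/usr/bin/env bash
# Stub for: rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#           | jq -r '.bdevs[0].num_read_ops'
# It replays the read counts observed in the trace: 3, then 67, then 131.
counts="3 67 131"
get_read_io_count() {
  set -- $counts
  READ_IO_COUNT=$1
  shift
  counts="$*"
}

# Bounded poll mirroring target/shutdown.sh@59-70: retry until the read
# count reaches 100 or the 10-attempt budget is exhausted.
waitforio() {
  local ret=1 i
  for ((i = 10; i != 0; i--)); do
    get_read_io_count
    if [ "$READ_IO_COUNT" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

waitforio && echo "read_io_count reached $READ_IO_COUNT"
# prints: read_io_count reached 131
```

Returning 0 as soon as the threshold is crossed is what lets the harness proceed to kill the target (the `killprocess` step below) while I/O is known to be in flight.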
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1852529 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1852529 ']' 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1852529 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.945 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1852529 00:24:21.223 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:21.223 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:21.223 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1852529' 00:24:21.223 killing process with pid 1852529 00:24:21.223 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 1852529 00:24:21.223 13:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 1852529 00:24:21.223 [2024-10-07 13:35:02.674517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18965b0 is same with the state(6) to be set 00:24:21.223 [2024-10-07 13:35:02.674627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18965b0 is same with the state(6) to be set 00:24:21.223 [2024-10-07 13:35:02.674645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18965b0 is same with the state(6) to be set
00:24:21.223 [the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x18965b0 repeats verbatim through 13:35:02.675619; repeats omitted]
00:24:21.223 [2024-10-07 13:35:02.678080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899030 is same with the state(6) to be set
00:24:21.224 [the same *ERROR* line for tqpair=0x1899030 repeats verbatim through 13:35:02.678933; repeats omitted]
00:24:21.224 [2024-10-07 13:35:02.678945]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899030 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.678956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899030 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.678968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1899030 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.679880] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:21.224 [2024-10-07 13:35:02.687550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687801] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with 
the state(6) to be set 00:24:21.224 [2024-10-07 13:35:02.687869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.687880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.687892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.687904] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.687915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.687926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.687937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.687952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.687963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.687974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.687986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.687997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 
00:24:21.225 [2024-10-07 13:35:02.688008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 
13:35:02.688144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.688212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896aa0 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.689850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.689884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.689900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.689913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.689925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.689937] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.689949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.689960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.689972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.689984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.689996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690090] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690187] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690234] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690375] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690398] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690444] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690513] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690524] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.225 [2024-10-07 13:35:02.690581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1896f70 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.691840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.691876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.691893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.691905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.691916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.691928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.691939] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.691951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.691962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.691973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.691984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.691996] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692083] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692141] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692209] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692220] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692288] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692351] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [2024-10-07 13:35:02.692362] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897460 is same with the state(6) to be set 00:24:21.226 [... same message for tqpair=0x1897460 repeated through 2024-10-07 13:35:02.692587 ...] 00:24:21.226 [2024-10-07 13:35:02.693528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897930 is same with the state(6) to be set 00:24:21.227 [... same message for tqpair=0x1897930 repeated through 2024-10-07 13:35:02.694302 ...] 00:24:21.227 [2024-10-07 13:35:02.695357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1897e20 is same with the state(6) to be set 00:24:21.228 [... same message for tqpair=0x1897e20 repeated through 2024-10-07 13:35:02.696112 ...] 00:24:21.228 [2024-10-07 13:35:02.697997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898670 is same with the state(6) to be set 00:24:21.229 [... same message for tqpair=0x1898670 repeated through 2024-10-07 13:35:02.698746 ...] 00:24:21.229 [2024-10-07 13:35:02.699497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [... same message for tqpair=0x1898b40 repeated through 2024-10-07 13:35:02.700004 ...] 00:24:21.229 [2024-10-07 13:35:02.700016]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700027] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700062] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700155] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.700253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1898b40 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.702434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.229 [2024-10-07 13:35:02.702474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.229 [2024-10-07 13:35:02.702492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.229 [2024-10-07 13:35:02.702506] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.229 [2024-10-07 13:35:02.702520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.229 [2024-10-07 13:35:02.702534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.229 [2024-10-07 13:35:02.702547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.229 [2024-10-07 13:35:02.702560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.229 [2024-10-07 13:35:02.702573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392080 is same with the state(6) to be set 00:24:21.229 [2024-10-07 13:35:02.702629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.229 [2024-10-07 13:35:02.702650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.229 [2024-10-07 13:35:02.702681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.229 [2024-10-07 13:35:02.702698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.229 [2024-10-07 13:35:02.702718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.229 [2024-10-07 13:35:02.702731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.229 [2024-10-07 13:35:02.702745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.229 [2024-10-07 13:35:02.702758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.702771] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b6000 is same with the state(6) to be set 00:24:21.230 [2024-10-07 13:35:02.702825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.702845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.702859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.702886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.702902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.702915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.702929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.702942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.702954] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23031e0 is same with the state(6) to be set 00:24:21.230 [2024-10-07 13:35:02.703005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703094] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27f1dc0 is same with the state(6) to be set 00:24:21.230 [2024-10-07 13:35:02.703166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2811ab0 is same with the state(6) to be set 00:24:21.230 [2024-10-07 13:35:02.703326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703394] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2811150 is same with the state(6) to be set 00:24:21.230 [2024-10-07 13:35:02.703494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2398bb0 is same with the state(6) to be set 00:24:21.230 [2024-10-07 13:35:02.703653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703802] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2390960 is same with the state(6) to be set 00:24:21.230 [2024-10-07 13:35:02.703854] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.703955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.703967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2382af0 is same with the state(6) to be set 00:24:21.230 [2024-10-07 13:35:02.704013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.704033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.704047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.704061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.704074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.704087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.704101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:21.230 [2024-10-07 13:35:02.704114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.704126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27f2b80 is same with the state(6) to be set 00:24:21.230 [2024-10-07 13:35:02.704784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.230 [2024-10-07 13:35:02.704818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.704844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.230 [2024-10-07 13:35:02.704860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.704878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.230 [2024-10-07 13:35:02.704892] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.704908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.230 [2024-10-07 13:35:02.704927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.704943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.230 [2024-10-07 13:35:02.704957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.230 [2024-10-07 13:35:02.704973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.230 [2024-10-07 13:35:02.704987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 
[2024-10-07 13:35:02.705393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.231 [2024-10-07 13:35:02.705963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.231 [2024-10-07 13:35:02.705977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.705992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 
[2024-10-07 13:35:02.706087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.706713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.706727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x279a590 is same with the state(6) to be set 00:24:21.232 [2024-10-07 13:35:02.706813] 
bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x279a590 was disconnected and freed. reset controller. 00:24:21.232 [2024-10-07 13:35:02.707122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.707166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.707203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.707234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.707263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.707293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.707321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.707350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.707378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.707408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.707437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:21.232 [2024-10-07 13:35:02.707467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.707495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.707524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.707558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.232 [2024-10-07 13:35:02.707589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.232 [2024-10-07 13:35:02.707604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.707619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.707633] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.707648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.707662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.707687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.707702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.707725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.707739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.707755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.707769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.707793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.707806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.707822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.707836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.707851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.707864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.707880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.707893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.707908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.707921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.707937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.707955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.707972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.707986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 
13:35:02.708161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708325] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 
[2024-10-07 13:35:02.708655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.233 [2024-10-07 13:35:02.708825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.233 [2024-10-07 13:35:02.708839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.708852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.708867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.708881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.708896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.708909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.708924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.708937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.708953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.708966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.708981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.708994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.709009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.709023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.709038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.709051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.709160] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x259dd70 was disconnected and freed. reset controller. 00:24:21.234 [2024-10-07 13:35:02.710926] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:21.234 [2024-10-07 13:35:02.710975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:21.234 [2024-10-07 13:35:02.711014] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2811ab0 (9): Bad file descriptor 00:24:21.234 [2024-10-07 13:35:02.712766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:21.234 [2024-10-07 13:35:02.712802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2392080 (9): Bad file descriptor 00:24:21.234 [2024-10-07 13:35:02.712880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27b6000 (9): Bad file descriptor 00:24:21.234 [2024-10-07 13:35:02.712920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23031e0 (9): Bad file descriptor 00:24:21.234 
[2024-10-07 13:35:02.712952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27f1dc0 (9): Bad file descriptor 00:24:21.234 [2024-10-07 13:35:02.712986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2811150 (9): Bad file descriptor 00:24:21.234 [2024-10-07 13:35:02.713021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2398bb0 (9): Bad file descriptor 00:24:21.234 [2024-10-07 13:35:02.713050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2390960 (9): Bad file descriptor 00:24:21.234 [2024-10-07 13:35:02.713079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2382af0 (9): Bad file descriptor 00:24:21.234 [2024-10-07 13:35:02.713108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27f2b80 (9): Bad file descriptor 00:24:21.234 [2024-10-07 13:35:02.713820] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:21.234 [2024-10-07 13:35:02.713906] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:21.234 [2024-10-07 13:35:02.714086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.234 [2024-10-07 13:35:02.714118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2811ab0 with addr=10.0.0.2, port=4420 00:24:21.234 [2024-10-07 13:35:02.714137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2811ab0 is same with the state(6) to be set 00:24:21.234 [2024-10-07 13:35:02.714488] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:21.234 [2024-10-07 13:35:02.714630] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:21.234 [2024-10-07 13:35:02.714716] nvme_tcp.c:1252:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:21.234 
[2024-10-07 13:35:02.714868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.234 [2024-10-07 13:35:02.714897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2392080 with addr=10.0.0.2, port=4420 00:24:21.234 [2024-10-07 13:35:02.714915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392080 is same with the state(6) to be set 00:24:21.234 [2024-10-07 13:35:02.714933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2811ab0 (9): Bad file descriptor 00:24:21.234 [2024-10-07 13:35:02.714995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:21.234 [2024-10-07 13:35:02.715154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715313] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.234 [2024-10-07 13:35:02.715709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.234 [2024-10-07 13:35:02.715724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.715738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.715753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.715766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.715782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.715795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.715811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 
13:35:02.715824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.715844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.715860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.715875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.715888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.715903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.715917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.715932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.715945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.715961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.715974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.715990] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 
[2024-10-07 13:35:02.716334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.235 [2024-10-07 13:35:02.716935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.235 [2024-10-07 13:35:02.716949] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2883440 is same with the state(6) to be set 00:24:21.235 [2024-10-07 13:35:02.717063] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2883440 was disconnected and freed. reset controller. 
00:24:21.235 [2024-10-07 13:35:02.717170] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2392080 (9): Bad file descriptor 00:24:21.235 [2024-10-07 13:35:02.717195] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:21.236 [2024-10-07 13:35:02.717209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:21.236 [2024-10-07 13:35:02.717225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:21.236 [2024-10-07 13:35:02.718464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 
13:35:02.718773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718947] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.718976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.718989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 
[2024-10-07 13:35:02.719289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719449] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.236 [2024-10-07 13:35:02.719478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.236 [2024-10-07 13:35:02.719492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719963] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.719976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.719991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.720005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.720021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.720034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.720050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.720063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.720079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.720092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.720108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.720121] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.720137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.720150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.720166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.720180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.720195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.720208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.720224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.720237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.720252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.720266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.720281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.720298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.720315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.720329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.720345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.720358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.720374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.720387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.720402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2884940 is same with the state(6) to be set 00:24:21.237 [2024-10-07 13:35:02.720481] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2884940 was disconnected and freed. reset controller. 00:24:21.237 [2024-10-07 13:35:02.720533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:21.237 [2024-10-07 13:35:02.720555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:21.237 [2024-10-07 13:35:02.720596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:21.237 [2024-10-07 13:35:02.720614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:21.237 [2024-10-07 13:35:02.720628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:21.237 [2024-10-07 13:35:02.721841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.237 [2024-10-07 13:35:02.721865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:21.237 [2024-10-07 13:35:02.722019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-10-07 13:35:02.722048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2390960 with addr=10.0.0.2, port=4420 00:24:21.237 [2024-10-07 13:35:02.722065] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2390960 is same with the state(6) to be set 00:24:21.237 [2024-10-07 13:35:02.722503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.237 [2024-10-07 13:35:02.722530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2398bb0 with addr=10.0.0.2, port=4420 00:24:21.237 [2024-10-07 13:35:02.722547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2398bb0 is same with the state(6) to be set 00:24:21.237 [2024-10-07 13:35:02.722566] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2390960 (9): Bad file descriptor 00:24:21.237 [2024-10-07 13:35:02.722906] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2398bb0 (9): Bad file descriptor 00:24:21.237 [2024-10-07 13:35:02.722932] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:21.237 [2024-10-07 13:35:02.722945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:21.237 [2024-10-07 13:35:02.722959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:21.237 [2024-10-07 13:35:02.723084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.237 [2024-10-07 13:35:02.723137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:21.237 [2024-10-07 13:35:02.723159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:21.237 [2024-10-07 13:35:02.723173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:24:21.237 [2024-10-07 13:35:02.723235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.237 [2024-10-07 13:35:02.723255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.237 [2024-10-07 13:35:02.723275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:21.238 [2024-10-07 13:35:02.723751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723905] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.723983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.723996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 
13:35:02.724402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.238 [2024-10-07 13:35:02.724446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.238 [2024-10-07 13:35:02.724459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724559] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 
[2024-10-07 13:35:02.724907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.724979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.724992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.725008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.725021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.725036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.725053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.725070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.725084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.725100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.725113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.725127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x259cb30 is same with the state(6) to be set 00:24:21.239 [2024-10-07 13:35:02.726386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:21.239 [2024-10-07 13:35:02.726676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726847] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.239 [2024-10-07 13:35:02.726965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.239 [2024-10-07 13:35:02.726980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.726994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 
13:35:02.727346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727511] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 
[2024-10-07 13:35:02.727853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.727986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.240 [2024-10-07 13:35:02.727999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.240 [2024-10-07 13:35:02.728015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.728029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.728044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.728058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.728073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.728087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.728102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.728115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.728131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.728145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.728160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.728173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.728197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.728212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.728228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.728241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.728257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.728270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.728285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.728299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.728313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2885ec0 is same with the state(6) to be set 00:24:21.241 [2024-10-07 13:35:02.729552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.729576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:21.241 [2024-10-07 13:35:02.729596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.729611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.729628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.729642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.729658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.729679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.729695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.729709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.729725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.729740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.729755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.729769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.729784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.729798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.729820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.729835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.729851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.729865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.729881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.729894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.729910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.729923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.729939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.729952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.729968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.729982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.729997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.730026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.730056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.730085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:21.241 [2024-10-07 13:35:02.730115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.730144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.730174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.730208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.730238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.730267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730281] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.730296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.730326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.730354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.730383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.730412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.241 [2024-10-07 13:35:02.730441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.241 [2024-10-07 13:35:02.730455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 
13:35:02.730786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730951] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.730980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.730993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 
[2024-10-07 13:35:02.731297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731469] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.731484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.731499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2799010 is same with the state(6) to be set 00:24:21.242 [2024-10-07 13:35:02.732751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.732775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.732797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.732813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.732829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.732844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.732860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.732875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.732891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.732905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.732921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.732935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.732956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.732971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.242 [2024-10-07 13:35:02.732987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.242 [2024-10-07 13:35:02.733001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.243 [2024-10-07 13:35:02.733017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.243 [2024-10-07 13:35:02.733032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.243 [2024-10-07 13:35:02.733048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.243 [2024-10-07 13:35:02.733062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:21.243 [2024-10-07 13:35:02.733078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.243 [2024-10-07 13:35:02.733092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.243 [2024-10-07 13:35:02.733108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.243 [2024-10-07 13:35:02.733123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.243 [2024-10-07 13:35:02.733139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.243 [2024-10-07 13:35:02.733153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.243 [2024-10-07 13:35:02.733168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.243 [2024-10-07 13:35:02.733182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.243 [2024-10-07 13:35:02.733197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.243 [2024-10-07 13:35:02.733211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.243 [2024-10-07 13:35:02.733227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.243 [2024-10-07 13:35:02.733241] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.243 [2024-10-07 13:35:02.733256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.243 [2024-10-07 13:35:02.733270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.243 [2024-10-07 13:35:02.733286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.243 [2024-10-07 13:35:02.733300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.243 [2024-10-07 13:35:02.733316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.243 [2024-10-07 13:35:02.733336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.243 [2024-10-07 13:35:02.733353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.243 [2024-10-07 13:35:02.733368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.243 [2024-10-07 13:35:02.733384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.243 [2024-10-07 13:35:02.733398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.243 [2024-10-07 13:35:02.733415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:21.243 [2024-10-07 13:35:02.733429 - 13:35:02.734664] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:21-63 nsid:1 lba:19072-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (43 identical command/completion pairs collapsed)
00:24:21.244 [2024-10-07 13:35:02.734687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x279bb10 is same with the state(6) to be set
00:24:21.244 [2024-10-07 13:35:02.735926 - 13:35:02.737850] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 identical command/completion pairs collapsed)
00:24:21.246 [2024-10-07 13:35:02.737864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x279cf50 is same with the state(6) to be set
00:24:21.246 [2024-10-07 13:35:02.739566 - 13:35:02.739734] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:4-8 nsid:1 lba:16896-17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (5 identical command/completion pairs collapsed)
00:24:21.246 [2024-10-07 13:35:02.739749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1
lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.739763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.739779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.739793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.739808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.739822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.739837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.739851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.739867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.739880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.739896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.739909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:21.246 [2024-10-07 13:35:02.739925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.739938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.739954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.739967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.739983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.739997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740091] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.246 [2024-10-07 13:35:02.740554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.246 [2024-10-07 13:35:02.740569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 
13:35:02.740582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.740596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.740610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.740625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.740638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.740653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.740672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.740690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.740704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.740719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.740733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.740747] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.740761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.740781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.740796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.740811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.740824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.740839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.740852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.740867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.740880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.740895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.740908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.740923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.740937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.740952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.740966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.740981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.740994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 
[2024-10-07 13:35:02.741080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:21.247 [2024-10-07 13:35:02.741478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:21.247 [2024-10-07 13:35:02.741492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x279e4d0 is same with the state(6) to be set 00:24:21.247 [2024-10-07 13:35:02.743123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:21.247 [2024-10-07 13:35:02.743162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.247 [2024-10-07 13:35:02.743181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:21.247 [2024-10-07 13:35:02.743199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:21.247 [2024-10-07 13:35:02.743217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:21.247 [2024-10-07 13:35:02.743354] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:21.247 [2024-10-07 13:35:02.743381] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:21.247 [2024-10-07 13:35:02.743402] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:21.247 [2024-10-07 13:35:02.743502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:24:21.247 [2024-10-07 13:35:02.743528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:21.247 task offset: 24576 on job bdev=Nvme7n1 fails
00:24:21.247
00:24:21.247 Latency(us)
00:24:21.247 [2024-10-07T11:35:02.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:21.247 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.247 Job: Nvme1n1 ended in about 0.94 seconds with error
00:24:21.247 Verification LBA range: start 0x0 length 0x400
00:24:21.247 Nvme1n1 : 0.94 135.48 8.47 67.74 0.00 311563.50 22039.51 265639.25
00:24:21.247 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.247 Job: Nvme2n1 ended in about 0.93 seconds with error
00:24:21.247 Verification LBA range: start 0x0 length 0x400
00:24:21.247 Nvme2n1 : 0.93 217.04 13.56 68.77 0.00 217015.59 6941.96 237677.23
00:24:21.247 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.247 Job: Nvme3n1 ended in about 0.94 seconds with error
00:24:21.247 Verification LBA range: start 0x0 length 0x400
00:24:21.247 Nvme3n1 : 0.94 204.92 12.81 68.31 0.00 222528.19 5242.88 257872.02
00:24:21.247 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.247 Job: Nvme4n1 ended in about 0.94 seconds with error
00:24:21.247 Verification LBA range: start 0x0 length 0x400
00:24:21.247 Nvme4n1 : 0.94 208.44 13.03 68.06 0.00 215427.47 12039.21 276513.37
00:24:21.247 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.247 Job: Nvme5n1 ended in about 0.95 seconds with error
00:24:21.247 Verification LBA range: start 0x0 length 0x400
00:24:21.247 Nvme5n1 : 0.95 135.03 8.44 67.51 0.00 288194.31 22233.69 267192.70
00:24:21.247 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.247 Job: Nvme6n1 ended in about 0.95 seconds with error
00:24:21.248 Verification LBA range: start 0x0 length 0x400
00:24:21.248 Nvme6n1 : 0.95 134.58 8.41 67.29 0.00 283143.08 23107.51 257872.02
00:24:21.248 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.248 Job: Nvme7n1 ended in about 0.93 seconds with error
00:24:21.248 Verification LBA range: start 0x0 length 0x400
00:24:21.248 Nvme7n1 : 0.93 206.61 12.91 68.87 0.00 202284.56 7815.77 267192.70
00:24:21.248 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.248 Job: Nvme8n1 ended in about 0.95 seconds with error
00:24:21.248 Verification LBA range: start 0x0 length 0x400
00:24:21.248 Nvme8n1 : 0.95 134.13 8.38 67.06 0.00 272224.46 16408.27 260978.92
00:24:21.248 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.248 Job: Nvme9n1 ended in about 0.96 seconds with error
00:24:21.248 Verification LBA range: start 0x0 length 0x400
00:24:21.248 Nvme9n1 : 0.96 133.68 8.36 66.84 0.00 267465.26 25631.86 298261.62
00:24:21.248 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:21.248 Job: Nvme10n1 ended in about 0.96 seconds with error
00:24:21.248 Verification LBA range: start 0x0 length 0x400
00:24:21.248 Nvme10n1 : 0.96 137.34 8.58 66.59 0.00 257654.56 20194.80 268746.15
00:24:21.248 [2024-10-07T11:35:02.960Z] ===================================================================================================================
00:24:21.248 [2024-10-07T11:35:02.960Z] Total : 1647.25 102.95 677.04 0.00 248918.44 5242.88 298261.62
00:24:21.248 [2024-10-07 13:35:02.770380] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:21.248 [2024-10-07 13:35:02.770486] 
nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:21.248 [2024-10-07 13:35:02.770834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-10-07 13:35:02.770873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2811ab0 with addr=10.0.0.2, port=4420 00:24:21.248 [2024-10-07 13:35:02.770895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2811ab0 is same with the state(6) to be set 00:24:21.248 [2024-10-07 13:35:02.770994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-10-07 13:35:02.771021] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2382af0 with addr=10.0.0.2, port=4420 00:24:21.248 [2024-10-07 13:35:02.771038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2382af0 is same with the state(6) to be set 00:24:21.248 [2024-10-07 13:35:02.771146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-10-07 13:35:02.771172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27b6000 with addr=10.0.0.2, port=4420 00:24:21.248 [2024-10-07 13:35:02.771189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27b6000 is same with the state(6) to be set 00:24:21.248 [2024-10-07 13:35:02.771281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-10-07 13:35:02.771308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23031e0 with addr=10.0.0.2, port=4420 00:24:21.248 [2024-10-07 13:35:02.771325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23031e0 is same with the state(6) to be set 00:24:21.248 [2024-10-07 13:35:02.773047] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:21.248 [2024-10-07 13:35:02.773079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:21.248 [2024-10-07 13:35:02.773252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-10-07 13:35:02.773281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2811150 with addr=10.0.0.2, port=4420 00:24:21.248 [2024-10-07 13:35:02.773298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2811150 is same with the state(6) to be set 00:24:21.248 [2024-10-07 13:35:02.773374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-10-07 13:35:02.773401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27f2b80 with addr=10.0.0.2, port=4420 00:24:21.248 [2024-10-07 13:35:02.773417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27f2b80 is same with the state(6) to be set 00:24:21.248 [2024-10-07 13:35:02.773512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-10-07 13:35:02.773539] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x27f1dc0 with addr=10.0.0.2, port=4420 00:24:21.248 [2024-10-07 13:35:02.773555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27f1dc0 is same with the state(6) to be set 00:24:21.248 [2024-10-07 13:35:02.773581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2811ab0 (9): Bad file descriptor 00:24:21.248 [2024-10-07 13:35:02.773604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2382af0 (9): Bad file descriptor 00:24:21.248 [2024-10-07 13:35:02.773637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x27b6000 (9): Bad file descriptor 00:24:21.248 [2024-10-07 13:35:02.773656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23031e0 (9): Bad file descriptor 00:24:21.248 [2024-10-07 13:35:02.773749] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:21.248 [2024-10-07 13:35:02.773783] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:21.248 [2024-10-07 13:35:02.773802] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:21.248 [2024-10-07 13:35:02.773822] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:21.248 [2024-10-07 13:35:02.773843] bdev_nvme.c:3031:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:21.248 [2024-10-07 13:35:02.774190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:21.248 [2024-10-07 13:35:02.774331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-10-07 13:35:02.774359] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2392080 with addr=10.0.0.2, port=4420 00:24:21.248 [2024-10-07 13:35:02.774376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2392080 is same with the state(6) to be set 00:24:21.248 [2024-10-07 13:35:02.774470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-10-07 13:35:02.774495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2390960 with addr=10.0.0.2, port=4420 00:24:21.248 [2024-10-07 13:35:02.774511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2390960 is same with the state(6) to be 
set 00:24:21.248 [2024-10-07 13:35:02.774529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2811150 (9): Bad file descriptor 00:24:21.248 [2024-10-07 13:35:02.774548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27f2b80 (9): Bad file descriptor 00:24:21.248 [2024-10-07 13:35:02.774565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x27f1dc0 (9): Bad file descriptor 00:24:21.248 [2024-10-07 13:35:02.774581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:21.248 [2024-10-07 13:35:02.774595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:21.248 [2024-10-07 13:35:02.774610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:21.248 [2024-10-07 13:35:02.774631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:21.248 [2024-10-07 13:35:02.774645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:21.248 [2024-10-07 13:35:02.774658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:21.248 [2024-10-07 13:35:02.774685] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:21.248 [2024-10-07 13:35:02.774700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:21.248 [2024-10-07 13:35:02.774713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:24:21.248 [2024-10-07 13:35:02.774730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:21.248 [2024-10-07 13:35:02.774744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:21.248 [2024-10-07 13:35:02.774756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:21.248 [2024-10-07 13:35:02.774858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.248 [2024-10-07 13:35:02.774885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.248 [2024-10-07 13:35:02.774898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.248 [2024-10-07 13:35:02.774909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.248 [2024-10-07 13:35:02.774989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:21.248 [2024-10-07 13:35:02.775015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2398bb0 with addr=10.0.0.2, port=4420 00:24:21.248 [2024-10-07 13:35:02.775031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2398bb0 is same with the state(6) to be set 00:24:21.248 [2024-10-07 13:35:02.775049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2392080 (9): Bad file descriptor 00:24:21.248 [2024-10-07 13:35:02.775067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2390960 (9): Bad file descriptor 00:24:21.248 [2024-10-07 13:35:02.775083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:21.248 [2024-10-07 13:35:02.775096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:21.248 [2024-10-07 13:35:02.775108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:21.248 [2024-10-07 13:35:02.775125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:21.248 [2024-10-07 13:35:02.775140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:21.248 [2024-10-07 13:35:02.775152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:21.248 [2024-10-07 13:35:02.775167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:21.248 [2024-10-07 13:35:02.775180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:21.248 [2024-10-07 13:35:02.775193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:21.248 [2024-10-07 13:35:02.775234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.248 [2024-10-07 13:35:02.775253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.248 [2024-10-07 13:35:02.775265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:21.248 [2024-10-07 13:35:02.775279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2398bb0 (9): Bad file descriptor 00:24:21.248 [2024-10-07 13:35:02.775296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:21.248 [2024-10-07 13:35:02.775308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:21.249 [2024-10-07 13:35:02.775321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:21.249 [2024-10-07 13:35:02.775337] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:21.249 [2024-10-07 13:35:02.775351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:21.249 [2024-10-07 13:35:02.775363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:21.249 [2024-10-07 13:35:02.775404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.249 [2024-10-07 13:35:02.775423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:21.249 [2024-10-07 13:35:02.775435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:21.249 [2024-10-07 13:35:02.775447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:21.249 [2024-10-07 13:35:02.775465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:21.249 [2024-10-07 13:35:02.775502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:21.814 13:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1852704 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1852704 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1852704 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:22.750 rmmod nvme_tcp 00:24:22.750 rmmod nvme_fabrics 00:24:22.750 rmmod nvme_keyring 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:24:22.750 13:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@515 -- # '[' -n 1852529 ']' 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # killprocess 1852529 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 1852529 ']' 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 1852529 00:24:22.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1852529) - No such process 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1852529 is not found' 00:24:22.750 Process with pid 1852529 is not found 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-save 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@789 -- # iptables-restore 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:22.750 13:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:25.288 00:24:25.288 real 0m7.681s 00:24:25.288 user 0m19.080s 00:24:25.288 sys 0m1.514s 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:25.288 ************************************ 00:24:25.288 END TEST nvmf_shutdown_tc3 00:24:25.288 ************************************ 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:25.288 ************************************ 00:24:25.288 START TEST nvmf_shutdown_tc4 00:24:25.288 ************************************ 00:24:25.288 13:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc4 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:25.288 13:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:25.288 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:25.289 13:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:24:25.289 Found 0000:09:00.0 (0x8086 - 0x1592) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:24:25.289 Found 0000:09:00.1 (0x8086 - 0x1592) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:25.289 13:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:24:25.289 Found net devices under 0000:09:00.0: cvl_0_0 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:25.289 Found net devices under 0000:09:00.1: cvl_0_1 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # is_hw=yes 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@443 -- # [[ tcp == 
tcp ]] 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:25.289 13:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:25.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:25.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:24:25.289 00:24:25.289 --- 10.0.0.2 ping statistics --- 00:24:25.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.289 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:25.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:25.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:24:25.289 00:24:25.289 --- 10.0.0.1 ping statistics --- 00:24:25.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:25.289 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:25.289 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@448 -- # return 0 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:25.290 13:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # nvmfpid=1853581 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # waitforlisten 1853581 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@831 -- # '[' -z 1853581 ']' 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:25.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:25.290 [2024-10-07 13:35:06.657899] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:24:25.290 [2024-10-07 13:35:06.657975] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.290 [2024-10-07 13:35:06.720789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:25.290 [2024-10-07 13:35:06.826213] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:25.290 [2024-10-07 13:35:06.826273] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:25.290 [2024-10-07 13:35:06.826301] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:25.290 [2024-10-07 13:35:06.826313] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:25.290 [2024-10-07 13:35:06.826323] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:25.290 [2024-10-07 13:35:06.827718] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:25.290 [2024-10-07 13:35:06.827786] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:25.290 [2024-10-07 13:35:06.827855] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:24:25.290 [2024-10-07 13:35:06.827859] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # return 0 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:25.290 [2024-10-07 13:35:06.983631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.290 13:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:25.290 13:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.548 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:25.548 Malloc1 00:24:25.548 [2024-10-07 13:35:07.073156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.548 Malloc2 00:24:25.548 Malloc3 00:24:25.548 Malloc4 00:24:25.548 Malloc5 00:24:25.806 Malloc6 00:24:25.806 Malloc7 00:24:25.806 Malloc8 00:24:25.806 Malloc9 
00:24:25.806 Malloc10 00:24:25.806 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.806 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:25.806 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:25.806 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:26.066 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1853672 00:24:26.066 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:24:26.066 13:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:24:26.066 [2024-10-07 13:35:07.589634] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:31.349 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:31.349 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1853581 00:24:31.349 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1853581 ']' 00:24:31.349 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1853581 00:24:31.349 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # uname 00:24:31.349 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:31.349 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1853581 00:24:31.349 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:31.349 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:31.349 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1853581' 00:24:31.349 killing process with pid 1853581 00:24:31.349 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@969 -- # kill 1853581 00:24:31.349 13:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@974 -- # wait 1853581 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 
00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 
starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 [2024-10-07 13:35:12.599895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, 
sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 [2024-10-07 13:35:12.600599] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0cd0 is same with the state(6) to be set 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 [2024-10-07 13:35:12.600679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0cd0 is same with the state(6) to be set 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 [2024-10-07 13:35:12.600698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0cd0 is same with the state(6) to be set 00:24:31.349 [2024-10-07 13:35:12.600723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0cd0 is same with the state(6) to be set 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 [2024-10-07 13:35:12.600735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0cd0 is same with the state(6) to be set 00:24:31.349 [2024-10-07 13:35:12.600747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0cd0 is same with the state(6) to be set 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 [2024-10-07 13:35:12.600760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0cd0 is same with the state(6) to be set 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 [2024-10-07
13:35:12.600772] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0cd0 is same with the state(6) to be set 00:24:31.349 [2024-10-07 13:35:12.600784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0cd0 is same with the state(6) to be set 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 [2024-10-07 13:35:12.600796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb0cd0 is same with the state(6) to be set 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 [2024-10-07 13:35:12.600974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 Write completed with error (sct=0, sc=8) 00:24:31.349 starting I/O failed: -6 00:24:31.350 Write completed with error 
(sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 [2024-10-07 13:35:12.601714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1690 is same with the state(6) to be set 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 [2024-10-07 13:35:12.601763] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1690 is same with the state(6) to be set 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 [2024-10-07 13:35:12.601782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1690 is same with the state(6) to be set 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 [2024-10-07 13:35:12.601795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1690 is same with the state(6) to be set 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 [2024-10-07 13:35:12.601808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1690 is same with the state(6) to be set 00:24:31.350 starting I/O failed: -6 00:24:31.350 [2024-10-07 13:35:12.601821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1690 is same with the state(6) to be set 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 [2024-10-07 13:35:12.601833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb1690 is same with the state(6) to be set 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 Write completed
with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 [2024-10-07 13:35:12.602136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 [2024-10-07 13:35:12.602546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xfb3390 is same with the state(6) to be set 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 [2024-10-07 13:35:12.602572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3390 is same with the state(6) to be set 00:24:31.350 starting I/O failed: -6 00:24:31.350 [2024-10-07 13:35:12.602587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3390 is same with the state(6) to be set 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 [2024-10-07 13:35:12.602604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3390 is same with the state(6) to be set 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 [2024-10-07 13:35:12.602617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3390 is same with the state(6) to be set 00:24:31.350 starting I/O failed: -6 00:24:31.350 [2024-10-07 13:35:12.602629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3390 is same with the state(6) to be set 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 [2024-10-07 13:35:12.602641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3390 is same with the state(6) to be set 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write
completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 [2024-10-07 13:35:12.603090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3860 is same with the state(6) to be set 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 [2024-10-07 13:35:12.603124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3860 is same with the state(6) to be set 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 [2024-10-07 13:35:12.603140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0xfb3860 is same with the state(6) to be set 00:24:31.350 starting I/O failed: -6 00:24:31.350 [2024-10-07 13:35:12.603154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3860 is same with the state(6) to be set 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 [2024-10-07 13:35:12.603167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3860 is same with the state(6) to be set 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 [2024-10-07 13:35:12.603179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3860 is same with the state(6) to be set 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.350 starting I/O failed: -6 00:24:31.350 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed
with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 [2024-10-07 13:35:12.603583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3d30 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.603609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3d30 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.603622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3d30 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.603635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3d30 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.603646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3d30 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.603659] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3d30 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.603680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3d30 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.603693] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb3d30 is same with the state(6) to be set 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 [2024-10-07 13:35:12.603774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.351 NVMe io qpair process completion error 00:24:31.351 [2024-10-07 13:35:12.604728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb2ec0 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.604767] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb2ec0 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.604784] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb2ec0 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.604797] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb2ec0 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.604809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb2ec0 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.604823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb2ec0 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.604834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb2ec0 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.604846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfb2ec0 is same with the state(6) to be set 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: 
-6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 [2024-10-07 13:35:12.606904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:31.351 [2024-10-07 13:35:12.607236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dbcd0 is same with the state(6) to be set 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 [2024-10-07 13:35:12.607264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dbcd0 is same with the state(6) to be set 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 [2024-10-07 13:35:12.607281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dbcd0 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.607293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dbcd0 is same with the state(6) to be set 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 
[2024-10-07 13:35:12.607305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dbcd0 is same with the state(6) to be set 00:24:31.351 [2024-10-07 13:35:12.607316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10dbcd0 is same with the state(6) to be set 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with
error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 [2024-10-07 13:35:12.608283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.351 NVMe io qpair process completion error 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 
00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 starting I/O failed: -6 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.351 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 [2024-10-07 
13:35:12.609536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write 
completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 [2024-10-07 13:35:12.610638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 
starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 
Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 [2024-10-07 13:35:12.611797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O 
failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting 
I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.352 Write completed with error (sct=0, sc=8) 00:24:31.352 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 
starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 [2024-10-07 13:35:12.613646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.353 NVMe io qpair process completion error 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, 
sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 [2024-10-07 13:35:12.615023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, 
sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 
Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 [2024-10-07 13:35:12.616101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.353 Write completed with error (sct=0, sc=8) 00:24:31.353 starting I/O failed: -6 00:24:31.354 Write completed with error (sct=0, sc=8) 00:24:31.354 starting I/O failed: -6 00:24:31.354 Write completed with error (sct=0, sc=8) 00:24:31.354 Write completed with error (sct=0, sc=8) 00:24:31.354 starting I/O failed: -6 00:24:31.354 Write completed with error (sct=0, sc=8) 00:24:31.354 starting I/O failed: -6 00:24:31.354 Write completed with error (sct=0, sc=8) 00:24:31.354 starting I/O failed: -6 00:24:31.354 Write completed with error (sct=0, sc=8) 00:24:31.354 Write completed 
00:24:31.354 Write completed with error (sct=0, sc=8)
00:24:31.354 starting I/O failed: -6
[... identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeated ...]
00:24:31.354 [2024-10-07 13:35:12.617268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... identical write-error / I/O-failed entries repeated ...]
00:24:31.354 [2024-10-07 13:35:12.619738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:31.354 NVMe io qpair process completion error
[... identical write-error / I/O-failed entries repeated ...]
00:24:31.355 [2024-10-07 13:35:12.621120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... identical write-error / I/O-failed entries repeated ...]
00:24:31.355 [2024-10-07 13:35:12.622197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
[... identical write-error / I/O-failed entries repeated ...]
00:24:31.355 [2024-10-07 13:35:12.623652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... identical write-error / I/O-failed entries repeated ...]
00:24:31.356 [2024-10-07 13:35:12.625433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.356 NVMe io qpair process completion error
[... identical write-error / I/O-failed entries repeated ...]
00:24:31.356 [2024-10-07 13:35:12.626830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... identical write-error / I/O-failed entries repeated ...]
00:24:31.356 [2024-10-07 13:35:12.627825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... identical write-error / I/O-failed entries repeated ...]
00:24:31.357 [2024-10-07 13:35:12.629078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... identical write-error / I/O-failed entries repeated ...]
00:24:31.357 Write completed with error (sct=0, sc=8)
00:24:31.357 starting I/O failed:
-6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O 
failed: -6 00:24:31.357 [2024-10-07 13:35:12.631751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.357 NVMe io qpair process completion error 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O 
failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.357 starting I/O failed: -6 00:24:31.357 Write completed with error (sct=0, sc=8) 00:24:31.358 [2024-10-07 13:35:12.633129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O 
failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 [2024-10-07 13:35:12.634088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error 
(sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting 
I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write 
completed with error (sct=0, sc=8) 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 [2024-10-07 13:35:12.635541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 
00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.358 Write completed with error (sct=0, sc=8) 00:24:31.358 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, 
sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 [2024-10-07 13:35:12.638351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:31.359 NVMe io qpair process completion error 
00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed 
with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 [2024-10-07 13:35:12.639524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, 
sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 [2024-10-07 13:35:12.640605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:31.359 starting I/O failed: -6 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 
00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.359 Write completed with error (sct=0, sc=8) 00:24:31.359 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 
00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 Write completed with error (sct=0, sc=8) 00:24:31.360 starting I/O failed: -6 00:24:31.360 [2024-10-07 13:35:12.641845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 
(No such device or address) on qpair id 4
00:24:31.360 Write completed with error (sct=0, sc=8)
00:24:31.360 starting I/O failed: -6
00:24:31.360 [2024-10-07 13:35:12.643843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:31.360 NVMe io qpair process completion error
00:24:31.360 [2024-10-07 13:35:12.645193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.361 [2024-10-07 13:35:12.646297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:31.361 [2024-10-07 13:35:12.647472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.362 [2024-10-07 13:35:12.649478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:31.362 NVMe io qpair process completion error
00:24:31.362 [2024-10-07 13:35:12.650827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:31.362 [2024-10-07 13:35:12.651897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:31.363 [2024-10-07 13:35:12.653031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:31.363 [2024-10-07 13:35:12.657294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:31.363 NVMe io qpair process completion error
00:24:31.363 Write completed with error (sct=0, sc=8)
00:24:31.364 Write completed with error (sct=0, sc=8) 00:24:31.364 Write completed with error (sct=0, sc=8) 00:24:31.364 Write completed with error (sct=0, sc=8) 00:24:31.364 Write completed with error (sct=0, sc=8) 00:24:31.364 Write completed with error (sct=0, sc=8) 00:24:31.364 Write completed with error (sct=0, sc=8) 00:24:31.364 Write completed with error (sct=0, sc=8) 00:24:31.364 Write completed with error (sct=0, sc=8) 00:24:31.364 Write completed with error (sct=0, sc=8) 00:24:31.364 Write completed with error (sct=0, sc=8) 00:24:31.364 Write completed with error (sct=0, sc=8) 00:24:31.364 Initializing NVMe Controllers 00:24:31.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:24:31.364 Controller IO queue size 128, less than required. 00:24:31.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:24:31.364 Controller IO queue size 128, less than required. 00:24:31.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:24:31.364 Controller IO queue size 128, less than required. 00:24:31.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:24:31.364 Controller IO queue size 128, less than required. 00:24:31.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:24:31.364 Controller IO queue size 128, less than required. 
00:24:31.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:24:31.364 Controller IO queue size 128, less than required. 00:24:31.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:31.364 Controller IO queue size 128, less than required. 00:24:31.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:24:31.364 Controller IO queue size 128, less than required. 00:24:31.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:24:31.364 Controller IO queue size 128, less than required. 00:24:31.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:31.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:24:31.364 Controller IO queue size 128, less than required. 00:24:31.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:31.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:24:31.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:24:31.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:24:31.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:24:31.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:24:31.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:24:31.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:31.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:24:31.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:24:31.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:24:31.364 Initialization complete. Launching workers. 
00:24:31.364 ======================================================== 00:24:31.364 Latency(us) 00:24:31.364 Device Information : IOPS MiB/s Average min max 00:24:31.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1692.06 72.71 75673.47 986.80 127815.34 00:24:31.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1721.02 73.95 74427.78 992.00 154571.14 00:24:31.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1728.68 74.28 74137.91 994.11 133973.64 00:24:31.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1711.65 73.55 74915.96 1119.18 123840.06 00:24:31.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1727.83 74.24 74244.09 959.94 139880.78 00:24:31.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1778.72 76.43 72148.04 1107.77 142323.65 00:24:31.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1755.30 75.42 72262.60 1156.74 122075.26 00:24:31.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1806.41 77.62 70834.95 701.52 121996.88 00:24:31.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1738.90 74.72 72975.03 1096.39 121885.13 00:24:31.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1706.32 73.32 74397.35 935.96 124563.96 00:24:31.364 ======================================================== 00:24:31.364 Total : 17366.90 746.23 73575.65 701.52 154571.14 00:24:31.364 00:24:31.364 [2024-10-07 13:35:12.666285] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f7bb0 is same with the state(6) to be set 00:24:31.364 [2024-10-07 13:35:12.666383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5780 is same with the state(6) to be set 00:24:31.364 [2024-10-07 13:35:12.666443] nvme_tcp.c: 
337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc040 is same with the state(6) to be set 00:24:31.364 [2024-10-07 13:35:12.666501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5ab0 is same with the state(6) to be set 00:24:31.364 [2024-10-07 13:35:12.666557] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc370 is same with the state(6) to be set 00:24:31.364 [2024-10-07 13:35:12.666619] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f5de0 is same with the state(6) to be set 00:24:31.364 [2024-10-07 13:35:12.666683] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f77f0 is same with the state(6) to be set 00:24:31.364 [2024-10-07 13:35:12.666742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fbd10 is same with the state(6) to be set 00:24:31.364 [2024-10-07 13:35:12.666799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10f79d0 is same with the state(6) to be set 00:24:31.364 [2024-10-07 13:35:12.666854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10fc6a0 is same with the state(6) to be set 00:24:31.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:24:31.625 13:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1853672 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1853672 00:24:32.562 13:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1853672 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:32.562 13:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:32.562 rmmod nvme_tcp 00:24:32.562 rmmod nvme_fabrics 00:24:32.562 rmmod nvme_keyring 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@515 -- # '[' -n 1853581 ']' 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # killprocess 1853581 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@950 -- # '[' -z 1853581 ']' 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # kill -0 1853581 00:24:32.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1853581) - No such process 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@977 -- # echo 'Process with pid 1853581 is not 
found' 00:24:32.562 Process with pid 1853581 is not found 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-save 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@789 -- # iptables-restore 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.562 13:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.157 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:35.157 00:24:35.157 real 0m9.849s 00:24:35.157 user 0m23.227s 00:24:35.157 sys 0m5.890s 00:24:35.157 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:24:35.157 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:35.157 ************************************ 00:24:35.157 END TEST nvmf_shutdown_tc4 00:24:35.157 ************************************ 00:24:35.157 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:35.157 00:24:35.157 real 0m37.751s 00:24:35.157 user 1m41.205s 00:24:35.157 sys 0m12.307s 00:24:35.157 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:35.157 13:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:35.157 ************************************ 00:24:35.157 END TEST nvmf_shutdown 00:24:35.157 ************************************ 00:24:35.157 13:35:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:24:35.157 00:24:35.157 real 11m35.684s 00:24:35.157 user 27m43.174s 00:24:35.157 sys 2m43.345s 00:24:35.157 13:35:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:35.157 13:35:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:35.157 ************************************ 00:24:35.157 END TEST nvmf_target_extra 00:24:35.157 ************************************ 00:24:35.157 13:35:16 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:35.157 13:35:16 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:35.157 13:35:16 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:35.157 13:35:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:35.157 ************************************ 00:24:35.157 START TEST nvmf_host 00:24:35.157 ************************************ 00:24:35.157 13:35:16 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:35.157 * Looking for test storage... 00:24:35.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:35.157 13:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:35.157 13:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:24:35.157 13:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:35.157 13:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:35.157 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:35.157 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:35.157 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:35.157 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.157 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:35.157 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:35.157 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:35.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.158 --rc genhtml_branch_coverage=1 00:24:35.158 --rc genhtml_function_coverage=1 00:24:35.158 --rc genhtml_legend=1 00:24:35.158 --rc geninfo_all_blocks=1 00:24:35.158 --rc geninfo_unexecuted_blocks=1 00:24:35.158 00:24:35.158 ' 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:35.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.158 --rc genhtml_branch_coverage=1 00:24:35.158 --rc genhtml_function_coverage=1 00:24:35.158 --rc genhtml_legend=1 00:24:35.158 --rc 
geninfo_all_blocks=1 00:24:35.158 --rc geninfo_unexecuted_blocks=1 00:24:35.158 00:24:35.158 ' 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:35.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.158 --rc genhtml_branch_coverage=1 00:24:35.158 --rc genhtml_function_coverage=1 00:24:35.158 --rc genhtml_legend=1 00:24:35.158 --rc geninfo_all_blocks=1 00:24:35.158 --rc geninfo_unexecuted_blocks=1 00:24:35.158 00:24:35.158 ' 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:35.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.158 --rc genhtml_branch_coverage=1 00:24:35.158 --rc genhtml_function_coverage=1 00:24:35.158 --rc genhtml_legend=1 00:24:35.158 --rc geninfo_all_blocks=1 00:24:35.158 --rc geninfo_unexecuted_blocks=1 00:24:35.158 00:24:35.158 ' 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:35.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.158 ************************************ 00:24:35.158 START TEST nvmf_multicontroller 00:24:35.158 ************************************ 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:35.158 * Looking for test storage... 
00:24:35.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lcov --version 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:35.158 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.159 --rc genhtml_branch_coverage=1 00:24:35.159 --rc genhtml_function_coverage=1 
00:24:35.159 --rc genhtml_legend=1 00:24:35.159 --rc geninfo_all_blocks=1 00:24:35.159 --rc geninfo_unexecuted_blocks=1 00:24:35.159 00:24:35.159 ' 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.159 --rc genhtml_branch_coverage=1 00:24:35.159 --rc genhtml_function_coverage=1 00:24:35.159 --rc genhtml_legend=1 00:24:35.159 --rc geninfo_all_blocks=1 00:24:35.159 --rc geninfo_unexecuted_blocks=1 00:24:35.159 00:24:35.159 ' 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.159 --rc genhtml_branch_coverage=1 00:24:35.159 --rc genhtml_function_coverage=1 00:24:35.159 --rc genhtml_legend=1 00:24:35.159 --rc geninfo_all_blocks=1 00:24:35.159 --rc geninfo_unexecuted_blocks=1 00:24:35.159 00:24:35.159 ' 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:35.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.159 --rc genhtml_branch_coverage=1 00:24:35.159 --rc genhtml_function_coverage=1 00:24:35.159 --rc genhtml_legend=1 00:24:35.159 --rc geninfo_all_blocks=1 00:24:35.159 --rc geninfo_unexecuted_blocks=1 00:24:35.159 00:24:35.159 ' 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.159 13:35:16 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:35.159 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@438 -- # remove_spdk_ns 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:35.159 13:35:16 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:24:37.690 Found 0000:09:00.0 (0x8086 - 0x1592) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:24:37.690 Found 0000:09:00.1 (0x8086 - 0x1592) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:37.690 13:35:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:37.690 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:37.691 Found net devices under 0000:09:00.0: cvl_0_0 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@408 -- # for pci in 
"${pci_devs[@]}" 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:37.691 Found net devices under 0000:09:00.1: cvl_0_1 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # is_hw=yes 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:37.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:37.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:24:37.691 00:24:37.691 --- 10.0.0.2 ping statistics --- 00:24:37.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.691 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:37.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:37.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:24:37.691 00:24:37.691 --- 10.0.0.1 ping statistics --- 00:24:37.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:37.691 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # return 0 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # nvmfpid=1856422 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # waitforlisten 1856422 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1856422 ']' 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.691 13:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.691 [2024-10-07 13:35:19.049128] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:24:37.691 [2024-10-07 13:35:19.049229] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.691 [2024-10-07 13:35:19.110805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:37.691 [2024-10-07 13:35:19.213255] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.691 [2024-10-07 13:35:19.213336] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:37.691 [2024-10-07 13:35:19.213359] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.691 [2024-10-07 13:35:19.213369] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.691 [2024-10-07 13:35:19.213378] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.691 [2024-10-07 13:35:19.214188] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.691 [2024-10-07 13:35:19.214254] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.691 [2024-10-07 13:35:19.214257] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.691 [2024-10-07 13:35:19.345985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.691 Malloc0 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.691 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:37.692 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.692 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.692 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.692 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:37.692 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.692 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.692 [2024-10-07 
13:35:19.401133] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.950 [2024-10-07 13:35:19.408990] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.950 Malloc1 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1856455 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1856455 /var/tmp/bdevperf.sock 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 1856455 ']' 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:37.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.950 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.208 NVMe0n1 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.208 1 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:38.208 13:35:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.208 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.208 request: 00:24:38.208 { 00:24:38.208 "name": "NVMe0", 00:24:38.208 "trtype": "tcp", 00:24:38.208 "traddr": "10.0.0.2", 00:24:38.208 "adrfam": "ipv4", 00:24:38.208 "trsvcid": "4420", 00:24:38.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.209 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:38.209 "hostaddr": "10.0.0.1", 00:24:38.209 "prchk_reftag": false, 00:24:38.209 "prchk_guard": false, 00:24:38.209 "hdgst": false, 00:24:38.209 "ddgst": false, 00:24:38.209 "allow_unrecognized_csi": false, 00:24:38.209 "method": "bdev_nvme_attach_controller", 00:24:38.209 "req_id": 1 00:24:38.209 } 00:24:38.209 Got JSON-RPC error response 00:24:38.209 response: 00:24:38.209 { 00:24:38.209 "code": -114, 00:24:38.209 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:38.209 } 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:38.209 13:35:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.209 request: 00:24:38.209 { 00:24:38.209 "name": "NVMe0", 00:24:38.209 "trtype": "tcp", 00:24:38.209 "traddr": "10.0.0.2", 00:24:38.209 "adrfam": "ipv4", 00:24:38.209 "trsvcid": "4420", 00:24:38.209 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:38.209 "hostaddr": "10.0.0.1", 00:24:38.209 "prchk_reftag": false, 00:24:38.209 "prchk_guard": false, 00:24:38.209 "hdgst": false, 00:24:38.209 "ddgst": false, 00:24:38.209 "allow_unrecognized_csi": false, 00:24:38.209 "method": "bdev_nvme_attach_controller", 00:24:38.209 "req_id": 1 00:24:38.209 } 00:24:38.209 Got JSON-RPC error response 00:24:38.209 response: 00:24:38.209 { 00:24:38.209 "code": -114, 00:24:38.209 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:38.209 } 00:24:38.209 13:35:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.209 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.469 request: 00:24:38.469 { 00:24:38.469 "name": "NVMe0", 00:24:38.469 "trtype": "tcp", 00:24:38.469 "traddr": "10.0.0.2", 00:24:38.469 "adrfam": "ipv4", 00:24:38.469 "trsvcid": "4420", 00:24:38.469 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.469 "hostaddr": "10.0.0.1", 00:24:38.469 "prchk_reftag": false, 00:24:38.469 "prchk_guard": false, 00:24:38.469 "hdgst": false, 00:24:38.469 "ddgst": false, 00:24:38.469 "multipath": "disable", 00:24:38.469 "allow_unrecognized_csi": false, 00:24:38.469 "method": "bdev_nvme_attach_controller", 00:24:38.469 "req_id": 1 00:24:38.469 } 00:24:38.469 Got JSON-RPC error response 00:24:38.469 response: 00:24:38.469 { 00:24:38.469 "code": -114, 00:24:38.469 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:38.469 } 00:24:38.469 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:38.469 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:38.469 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:38.469 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:38.469 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.470 request: 00:24:38.470 { 00:24:38.470 "name": "NVMe0", 00:24:38.470 "trtype": "tcp", 00:24:38.470 "traddr": "10.0.0.2", 00:24:38.470 "adrfam": "ipv4", 00:24:38.470 "trsvcid": "4420", 00:24:38.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:38.470 "hostaddr": "10.0.0.1", 00:24:38.470 "prchk_reftag": false, 00:24:38.470 "prchk_guard": false, 00:24:38.470 "hdgst": false, 00:24:38.470 "ddgst": false, 00:24:38.470 "multipath": "failover", 00:24:38.470 "allow_unrecognized_csi": false, 00:24:38.470 "method": "bdev_nvme_attach_controller", 00:24:38.470 "req_id": 1 00:24:38.470 } 00:24:38.470 Got JSON-RPC error response 00:24:38.470 response: 00:24:38.470 { 00:24:38.470 "code": -114, 00:24:38.470 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:38.470 } 00:24:38.470 13:35:19 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.470 13:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.470 NVMe0n1 00:24:38.470 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.470 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:38.470 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.470 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.470 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.470 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:38.470 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.470 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.730 00:24:38.730 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.730 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:38.730 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:38.730 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:38.730 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:38.730 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:38.730 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:38.730 13:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:39.664 { 00:24:39.664 "results": [ 00:24:39.664 { 00:24:39.664 "job": "NVMe0n1", 00:24:39.664 "core_mask": "0x1", 00:24:39.664 "workload": "write", 00:24:39.664 "status": "finished", 00:24:39.664 "queue_depth": 128, 00:24:39.664 "io_size": 4096, 00:24:39.664 "runtime": 1.009596, 00:24:39.664 "iops": 18180.539542549694, 00:24:39.664 "mibps": 71.01773258808474, 00:24:39.664 "io_failed": 0, 00:24:39.664 "io_timeout": 0, 00:24:39.664 "avg_latency_us": 7026.670250955942, 00:24:39.664 "min_latency_us": 4223.431111111111, 00:24:39.664 "max_latency_us": 16602.453333333335 00:24:39.664 } 00:24:39.664 ], 00:24:39.664 "core_count": 1 00:24:39.664 } 00:24:39.664 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:39.664 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:39.664 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:39.923 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:39.923 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:39.923 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1856455 00:24:39.923 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1856455 ']' 00:24:39.923 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1856455 00:24:39.923 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:24:39.923 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:39.923 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1856455 00:24:39.923 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:39.923 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:39.923 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1856455' 00:24:39.923 killing process with pid 1856455 00:24:39.923 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1856455 00:24:39.923 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1856455 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:24:40.184 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:40.184 [2024-10-07 13:35:19.511193] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:24:40.184 [2024-10-07 13:35:19.511294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1856455 ] 00:24:40.184 [2024-10-07 13:35:19.568807] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.184 [2024-10-07 13:35:19.680808] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.184 [2024-10-07 13:35:20.198432] bdev.c:4701:bdev_name_add: *ERROR*: Bdev name 1bce9aae-0be8-4bf4-920f-7ea8986ea6bb already exists 00:24:40.184 [2024-10-07 13:35:20.198475] bdev.c:7846:bdev_register: *ERROR*: Unable to add uuid:1bce9aae-0be8-4bf4-920f-7ea8986ea6bb alias for bdev NVMe1n1 00:24:40.184 [2024-10-07 13:35:20.198490] bdev_nvme.c:4483:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:40.184 Running I/O for 1 seconds... 00:24:40.184 18117.00 IOPS, 70.77 MiB/s 00:24:40.184 Latency(us) 00:24:40.184 [2024-10-07T11:35:21.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.184 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:40.184 NVMe0n1 : 1.01 18180.54 71.02 0.00 0.00 7026.67 4223.43 16602.45 00:24:40.184 [2024-10-07T11:35:21.896Z] =================================================================================================================== 00:24:40.184 [2024-10-07T11:35:21.896Z] Total : 18180.54 71.02 0.00 0.00 7026.67 4223.43 16602.45 00:24:40.184 Received shutdown signal, test time was about 1.000000 seconds 00:24:40.184 00:24:40.184 Latency(us) 00:24:40.184 [2024-10-07T11:35:21.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.184 [2024-10-07T11:35:21.896Z] =================================================================================================================== 00:24:40.184 [2024-10-07T11:35:21.896Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:24:40.184 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:40.184 rmmod nvme_tcp 00:24:40.184 rmmod nvme_fabrics 00:24:40.184 rmmod nvme_keyring 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@515 -- # '[' -n 1856422 ']' 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # killprocess 1856422 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 1856422 ']' 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 1856422 
00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:24:40.184 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:40.185 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1856422 00:24:40.185 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:40.185 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:40.185 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1856422' 00:24:40.185 killing process with pid 1856422 00:24:40.185 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 1856422 00:24:40.185 13:35:21 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 1856422 00:24:40.444 13:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:40.444 13:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:40.444 13:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:40.444 13:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:40.444 13:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-save 00:24:40.444 13:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:40.444 13:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@789 -- # iptables-restore 00:24:40.444 13:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:40.444 13:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:24:40.444 13:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.444 13:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.444 13:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:42.971 00:24:42.971 real 0m7.603s 00:24:42.971 user 0m11.793s 00:24:42.971 sys 0m2.349s 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:42.971 ************************************ 00:24:42.971 END TEST nvmf_multicontroller 00:24:42.971 ************************************ 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.971 ************************************ 00:24:42.971 START TEST nvmf_aer 00:24:42.971 ************************************ 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:42.971 * Looking for test storage... 
00:24:42.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lcov --version 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:42.971 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:42.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.972 --rc genhtml_branch_coverage=1 00:24:42.972 --rc genhtml_function_coverage=1 00:24:42.972 --rc genhtml_legend=1 00:24:42.972 --rc geninfo_all_blocks=1 00:24:42.972 --rc geninfo_unexecuted_blocks=1 00:24:42.972 00:24:42.972 ' 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:42.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.972 --rc 
genhtml_branch_coverage=1 00:24:42.972 --rc genhtml_function_coverage=1 00:24:42.972 --rc genhtml_legend=1 00:24:42.972 --rc geninfo_all_blocks=1 00:24:42.972 --rc geninfo_unexecuted_blocks=1 00:24:42.972 00:24:42.972 ' 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:42.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.972 --rc genhtml_branch_coverage=1 00:24:42.972 --rc genhtml_function_coverage=1 00:24:42.972 --rc genhtml_legend=1 00:24:42.972 --rc geninfo_all_blocks=1 00:24:42.972 --rc geninfo_unexecuted_blocks=1 00:24:42.972 00:24:42.972 ' 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:42.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:42.972 --rc genhtml_branch_coverage=1 00:24:42.972 --rc genhtml_function_coverage=1 00:24:42.972 --rc genhtml_legend=1 00:24:42.972 --rc geninfo_all_blocks=1 00:24:42.972 --rc geninfo_unexecuted_blocks=1 00:24:42.972 00:24:42.972 ' 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.972 13:35:24 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:42.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:42.972 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:42.973 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.973 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:42.973 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:42.973 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:42.973 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.973 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.973 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.973 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:42.973 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:42.973 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:42.973 13:35:24 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:24:44.874 Found 0000:09:00.0 (0x8086 - 0x1592) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:24:44.874 Found 0000:09:00.1 (0x8086 - 0x1592) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:44.874 13:35:26 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:44.874 Found net devices under 0000:09:00.0: cvl_0_0 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # 
(( 1 == 0 )) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:44.874 Found net devices under 0000:09:00.1: cvl_0_1 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # is_hw=yes 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:44.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:44.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:24:44.874 00:24:44.874 --- 10.0.0.2 ping statistics --- 00:24:44.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.874 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:44.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:24:44.874 00:24:44.874 --- 10.0.0.1 ping statistics --- 00:24:44.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.874 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # return 0 00:24:44.874 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:44.875 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.875 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:44.875 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:44.875 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # nvmfpid=1858572 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # waitforlisten 1858572 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 1858572 ']' 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:45.133 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.133 [2024-10-07 13:35:26.659844] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:24:45.133 [2024-10-07 13:35:26.659938] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.133 [2024-10-07 13:35:26.725878] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:45.133 [2024-10-07 13:35:26.834593] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:45.133 [2024-10-07 13:35:26.834680] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.133 [2024-10-07 13:35:26.834708] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.133 [2024-10-07 13:35:26.834734] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.133 [2024-10-07 13:35:26.834744] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.133 [2024-10-07 13:35:26.836492] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.133 [2024-10-07 13:35:26.836558] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:45.133 [2024-10-07 13:35:26.838687] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:45.133 [2024-10-07 13:35:26.838695] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.391 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:45.391 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:24:45.391 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:45.391 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:45.391 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.391 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.391 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:45.391 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.391 13:35:26 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.391 [2024-10-07 13:35:26.998126] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.391 Malloc0 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.391 [2024-10-07 13:35:27.051102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.391 [ 00:24:45.391 { 00:24:45.391 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:45.391 "subtype": "Discovery", 00:24:45.391 "listen_addresses": [], 00:24:45.391 "allow_any_host": true, 00:24:45.391 "hosts": [] 00:24:45.391 }, 00:24:45.391 { 00:24:45.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.391 "subtype": "NVMe", 00:24:45.391 "listen_addresses": [ 00:24:45.391 { 00:24:45.391 "trtype": "TCP", 00:24:45.391 "adrfam": "IPv4", 00:24:45.391 "traddr": "10.0.0.2", 00:24:45.391 "trsvcid": "4420" 00:24:45.391 } 00:24:45.391 ], 00:24:45.391 "allow_any_host": true, 00:24:45.391 "hosts": [], 00:24:45.391 "serial_number": "SPDK00000000000001", 00:24:45.391 "model_number": "SPDK bdev Controller", 00:24:45.391 "max_namespaces": 2, 00:24:45.391 "min_cntlid": 1, 00:24:45.391 "max_cntlid": 65519, 00:24:45.391 "namespaces": [ 00:24:45.391 { 00:24:45.391 "nsid": 1, 00:24:45.391 "bdev_name": "Malloc0", 00:24:45.391 "name": "Malloc0", 00:24:45.391 "nguid": "EE75171DFA424209A0E25D2D6567D5B4", 00:24:45.391 "uuid": "ee75171d-fa42-4209-a0e2-5d2d6567d5b4" 00:24:45.391 } 00:24:45.391 ] 00:24:45.391 } 00:24:45.391 ] 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1858706 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # 
waitforfile /tmp/aer_touch_file 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:45.391 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:45.650 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:45.650 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:45.650 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:45.651 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:45.651 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:45.651 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:24:45.651 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:24:45.651 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:45.909 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:45.909 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:45.909 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:45.909 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:45.909 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.909 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.909 Malloc1 00:24:45.909 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.909 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:45.909 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.909 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.909 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.909 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:45.909 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.909 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.909 [ 00:24:45.909 { 00:24:45.909 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:45.909 "subtype": "Discovery", 00:24:45.909 "listen_addresses": [], 00:24:45.909 "allow_any_host": true, 00:24:45.909 "hosts": [] 00:24:45.909 }, 00:24:45.909 { 00:24:45.909 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:45.909 "subtype": "NVMe", 00:24:45.909 "listen_addresses": [ 00:24:45.909 { 00:24:45.909 "trtype": "TCP", 00:24:45.909 "adrfam": "IPv4", 00:24:45.909 "traddr": "10.0.0.2", 00:24:45.909 "trsvcid": "4420" 00:24:45.909 } 00:24:45.909 ], 00:24:45.909 "allow_any_host": true, 00:24:45.909 "hosts": [], 00:24:45.909 "serial_number": "SPDK00000000000001", 00:24:45.909 "model_number": 
"SPDK bdev Controller", 00:24:45.909 "max_namespaces": 2, 00:24:45.909 "min_cntlid": 1, 00:24:45.909 "max_cntlid": 65519, 00:24:45.909 "namespaces": [ 00:24:45.909 { 00:24:45.909 "nsid": 1, 00:24:45.909 "bdev_name": "Malloc0", 00:24:45.909 "name": "Malloc0", 00:24:45.909 "nguid": "EE75171DFA424209A0E25D2D6567D5B4", 00:24:45.909 "uuid": "ee75171d-fa42-4209-a0e2-5d2d6567d5b4" 00:24:45.909 }, 00:24:45.909 { 00:24:45.909 "nsid": 2, 00:24:45.909 "bdev_name": "Malloc1", 00:24:45.909 "name": "Malloc1", 00:24:45.909 "nguid": "0CA81ECAC678439E94F2865B1FA585F5", 00:24:45.909 "uuid": "0ca81eca-c678-439e-94f2-865b1fa585f5" 00:24:45.909 } 00:24:45.909 ] 00:24:45.909 } 00:24:45.909 ] 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1858706 00:24:45.910 Asynchronous Event Request test 00:24:45.910 Attaching to 10.0.0.2 00:24:45.910 Attached to 10.0.0.2 00:24:45.910 Registering asynchronous event callbacks... 00:24:45.910 Starting namespace attribute notice tests for all controllers... 00:24:45.910 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:45.910 aer_cb - Changed Namespace 00:24:45.910 Cleaning up... 
00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:45.910 rmmod nvme_tcp 
00:24:45.910 rmmod nvme_fabrics 00:24:45.910 rmmod nvme_keyring 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@515 -- # '[' -n 1858572 ']' 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # killprocess 1858572 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 1858572 ']' 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 1858572 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:45.910 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1858572 00:24:46.169 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:46.169 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:46.169 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1858572' 00:24:46.169 killing process with pid 1858572 00:24:46.169 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 1858572 00:24:46.169 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 1858572 00:24:46.169 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:46.169 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:46.169 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:46.169 13:35:27 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:46.169 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-save 00:24:46.169 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:46.169 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@789 -- # iptables-restore 00:24:46.427 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:46.427 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:46.427 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:46.427 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:46.427 13:35:27 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.333 13:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:48.333 00:24:48.333 real 0m5.693s 00:24:48.333 user 0m4.797s 00:24:48.333 sys 0m2.060s 00:24:48.333 13:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:48.333 13:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:48.333 ************************************ 00:24:48.333 END TEST nvmf_aer 00:24:48.333 ************************************ 00:24:48.333 13:35:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:48.333 13:35:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:48.333 13:35:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:48.333 13:35:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.333 ************************************ 00:24:48.333 START TEST nvmf_async_init 
00:24:48.333 ************************************ 00:24:48.333 13:35:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:48.333 * Looking for test storage... 00:24:48.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.333 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:48.333 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lcov --version 00:24:48.333 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init 
-- scripts/common.sh@344 -- # case "$op" in 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.591 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:48.592 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:48.592 --rc genhtml_branch_coverage=1 00:24:48.592 --rc genhtml_function_coverage=1 00:24:48.592 --rc genhtml_legend=1 00:24:48.592 --rc geninfo_all_blocks=1 00:24:48.592 --rc geninfo_unexecuted_blocks=1 00:24:48.592 00:24:48.592 ' 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:48.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.592 --rc genhtml_branch_coverage=1 00:24:48.592 --rc genhtml_function_coverage=1 00:24:48.592 --rc genhtml_legend=1 00:24:48.592 --rc geninfo_all_blocks=1 00:24:48.592 --rc geninfo_unexecuted_blocks=1 00:24:48.592 00:24:48.592 ' 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:48.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.592 --rc genhtml_branch_coverage=1 00:24:48.592 --rc genhtml_function_coverage=1 00:24:48.592 --rc genhtml_legend=1 00:24:48.592 --rc geninfo_all_blocks=1 00:24:48.592 --rc geninfo_unexecuted_blocks=1 00:24:48.592 00:24:48.592 ' 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:48.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.592 --rc genhtml_branch_coverage=1 00:24:48.592 --rc genhtml_function_coverage=1 00:24:48.592 --rc genhtml_legend=1 00:24:48.592 --rc geninfo_all_blocks=1 00:24:48.592 --rc geninfo_unexecuted_blocks=1 00:24:48.592 00:24:48.592 ' 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.592 13:35:30 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.592 
13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:48.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=b0a443479c084f81b0a8b5b6f69c194f 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:48.592 13:35:30 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.496 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.496 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:50.496 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:50.496 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:50.496 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:50.496 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:50.496 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:50.496 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:50.496 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:50.497 13:35:32 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:24:50.497 Found 0000:09:00.0 (0x8086 - 0x1592) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:24:50.497 Found 0000:09:00.1 (0x8086 - 0x1592) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:50.497 Found net devices under 0000:09:00.0: cvl_0_0 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ 
up == up ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:50.497 Found net devices under 0000:09:00.1: cvl_0_1 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # is_hw=yes 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:50.497 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.756 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:50.756 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:50.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:50.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:24:50.757 00:24:50.757 --- 10.0.0.2 ping statistics --- 00:24:50.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.757 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:50.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:24:50.757 00:24:50.757 --- 10.0.0.1 ping statistics --- 00:24:50.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.757 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # return 0 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # nvmfpid=1860558 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # waitforlisten 1860558 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 1860558 ']' 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:50.757 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:50.757 [2024-10-07 13:35:32.342259] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:24:50.757 [2024-10-07 13:35:32.342356] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.757 [2024-10-07 13:35:32.403749] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.017 [2024-10-07 13:35:32.517049] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:51.017 [2024-10-07 13:35:32.517105] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:51.017 [2024-10-07 13:35:32.517127] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:51.017 [2024-10-07 13:35:32.517138] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:51.017 [2024-10-07 13:35:32.517147] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:51.017 [2024-10-07 13:35:32.517702] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.017 [2024-10-07 13:35:32.661602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.017 null0 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b0a443479c084f81b0a8b5b6f69c194f 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.017 [2024-10-07 13:35:32.701955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.017 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.277 nvme0n1 00:24:51.277 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.277 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:51.277 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.277 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.277 [ 00:24:51.277 { 00:24:51.277 "name": "nvme0n1", 00:24:51.277 "aliases": [ 00:24:51.277 "b0a44347-9c08-4f81-b0a8-b5b6f69c194f" 00:24:51.277 ], 00:24:51.277 "product_name": "NVMe disk", 00:24:51.277 "block_size": 512, 00:24:51.277 "num_blocks": 2097152, 00:24:51.277 "uuid": "b0a44347-9c08-4f81-b0a8-b5b6f69c194f", 00:24:51.277 "numa_id": 0, 00:24:51.277 "assigned_rate_limits": { 00:24:51.277 "rw_ios_per_sec": 0, 00:24:51.277 "rw_mbytes_per_sec": 0, 00:24:51.277 "r_mbytes_per_sec": 0, 00:24:51.277 "w_mbytes_per_sec": 0 00:24:51.277 }, 00:24:51.277 "claimed": false, 00:24:51.277 "zoned": false, 00:24:51.277 "supported_io_types": { 00:24:51.277 "read": true, 00:24:51.277 "write": true, 00:24:51.277 "unmap": false, 00:24:51.277 "flush": true, 00:24:51.277 "reset": true, 00:24:51.277 "nvme_admin": true, 00:24:51.277 "nvme_io": true, 00:24:51.277 "nvme_io_md": false, 00:24:51.277 "write_zeroes": true, 00:24:51.277 "zcopy": false, 00:24:51.277 "get_zone_info": false, 00:24:51.277 "zone_management": false, 00:24:51.277 "zone_append": false, 00:24:51.277 "compare": true, 00:24:51.277 "compare_and_write": true, 00:24:51.277 "abort": true, 00:24:51.277 "seek_hole": false, 00:24:51.277 "seek_data": false, 00:24:51.277 "copy": true, 00:24:51.277 
"nvme_iov_md": false 00:24:51.277 }, 00:24:51.277 "memory_domains": [ 00:24:51.277 { 00:24:51.277 "dma_device_id": "system", 00:24:51.277 "dma_device_type": 1 00:24:51.277 } 00:24:51.277 ], 00:24:51.277 "driver_specific": { 00:24:51.277 "nvme": [ 00:24:51.277 { 00:24:51.277 "trid": { 00:24:51.277 "trtype": "TCP", 00:24:51.277 "adrfam": "IPv4", 00:24:51.277 "traddr": "10.0.0.2", 00:24:51.277 "trsvcid": "4420", 00:24:51.277 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:51.277 }, 00:24:51.277 "ctrlr_data": { 00:24:51.277 "cntlid": 1, 00:24:51.277 "vendor_id": "0x8086", 00:24:51.277 "model_number": "SPDK bdev Controller", 00:24:51.277 "serial_number": "00000000000000000000", 00:24:51.277 "firmware_revision": "25.01", 00:24:51.277 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:51.277 "oacs": { 00:24:51.277 "security": 0, 00:24:51.277 "format": 0, 00:24:51.277 "firmware": 0, 00:24:51.277 "ns_manage": 0 00:24:51.277 }, 00:24:51.277 "multi_ctrlr": true, 00:24:51.277 "ana_reporting": false 00:24:51.277 }, 00:24:51.277 "vs": { 00:24:51.277 "nvme_version": "1.3" 00:24:51.277 }, 00:24:51.277 "ns_data": { 00:24:51.277 "id": 1, 00:24:51.277 "can_share": true 00:24:51.277 } 00:24:51.277 } 00:24:51.277 ], 00:24:51.277 "mp_policy": "active_passive" 00:24:51.277 } 00:24:51.277 } 00:24:51.277 ] 00:24:51.277 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.277 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:51.277 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.277 13:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.277 [2024-10-07 13:35:32.950356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:51.277 [2024-10-07 13:35:32.950459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0xf82690 (9): Bad file descriptor 00:24:51.536 [2024-10-07 13:35:33.082816] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.536 [ 00:24:51.536 { 00:24:51.536 "name": "nvme0n1", 00:24:51.536 "aliases": [ 00:24:51.536 "b0a44347-9c08-4f81-b0a8-b5b6f69c194f" 00:24:51.536 ], 00:24:51.536 "product_name": "NVMe disk", 00:24:51.536 "block_size": 512, 00:24:51.536 "num_blocks": 2097152, 00:24:51.536 "uuid": "b0a44347-9c08-4f81-b0a8-b5b6f69c194f", 00:24:51.536 "numa_id": 0, 00:24:51.536 "assigned_rate_limits": { 00:24:51.536 "rw_ios_per_sec": 0, 00:24:51.536 "rw_mbytes_per_sec": 0, 00:24:51.536 "r_mbytes_per_sec": 0, 00:24:51.536 "w_mbytes_per_sec": 0 00:24:51.536 }, 00:24:51.536 "claimed": false, 00:24:51.536 "zoned": false, 00:24:51.536 "supported_io_types": { 00:24:51.536 "read": true, 00:24:51.536 "write": true, 00:24:51.536 "unmap": false, 00:24:51.536 "flush": true, 00:24:51.536 "reset": true, 00:24:51.536 "nvme_admin": true, 00:24:51.536 "nvme_io": true, 00:24:51.536 "nvme_io_md": false, 00:24:51.536 "write_zeroes": true, 00:24:51.536 "zcopy": false, 00:24:51.536 "get_zone_info": false, 00:24:51.536 "zone_management": false, 00:24:51.536 "zone_append": false, 00:24:51.536 "compare": true, 00:24:51.536 "compare_and_write": true, 00:24:51.536 "abort": true, 00:24:51.536 "seek_hole": false, 00:24:51.536 "seek_data": false, 00:24:51.536 "copy": true, 00:24:51.536 "nvme_iov_md": false 00:24:51.536 }, 00:24:51.536 "memory_domains": [ 00:24:51.536 { 00:24:51.536 
"dma_device_id": "system", 00:24:51.536 "dma_device_type": 1 00:24:51.536 } 00:24:51.536 ], 00:24:51.536 "driver_specific": { 00:24:51.536 "nvme": [ 00:24:51.536 { 00:24:51.536 "trid": { 00:24:51.536 "trtype": "TCP", 00:24:51.536 "adrfam": "IPv4", 00:24:51.536 "traddr": "10.0.0.2", 00:24:51.536 "trsvcid": "4420", 00:24:51.536 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:51.536 }, 00:24:51.536 "ctrlr_data": { 00:24:51.536 "cntlid": 2, 00:24:51.536 "vendor_id": "0x8086", 00:24:51.536 "model_number": "SPDK bdev Controller", 00:24:51.536 "serial_number": "00000000000000000000", 00:24:51.536 "firmware_revision": "25.01", 00:24:51.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:51.536 "oacs": { 00:24:51.536 "security": 0, 00:24:51.536 "format": 0, 00:24:51.536 "firmware": 0, 00:24:51.536 "ns_manage": 0 00:24:51.536 }, 00:24:51.536 "multi_ctrlr": true, 00:24:51.536 "ana_reporting": false 00:24:51.536 }, 00:24:51.536 "vs": { 00:24:51.536 "nvme_version": "1.3" 00:24:51.536 }, 00:24:51.536 "ns_data": { 00:24:51.536 "id": 1, 00:24:51.536 "can_share": true 00:24:51.536 } 00:24:51.536 } 00:24:51.536 ], 00:24:51.536 "mp_policy": "active_passive" 00:24:51.536 } 00:24:51.536 } 00:24:51.536 ] 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4dpMiJhCN5 00:24:51.536 13:35:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4dpMiJhCN5 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.4dpMiJhCN5 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.536 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.537 [2024-10-07 13:35:33.143045] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:51.537 [2024-10-07 13:35:33.143195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.537 13:35:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.537 [2024-10-07 13:35:33.159081] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:51.537 nvme0n1 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.537 [ 00:24:51.537 { 00:24:51.537 "name": "nvme0n1", 00:24:51.537 "aliases": [ 00:24:51.537 "b0a44347-9c08-4f81-b0a8-b5b6f69c194f" 00:24:51.537 ], 00:24:51.537 "product_name": "NVMe disk", 00:24:51.537 "block_size": 512, 00:24:51.537 "num_blocks": 2097152, 00:24:51.537 "uuid": "b0a44347-9c08-4f81-b0a8-b5b6f69c194f", 00:24:51.537 "numa_id": 0, 00:24:51.537 "assigned_rate_limits": { 00:24:51.537 "rw_ios_per_sec": 0, 00:24:51.537 "rw_mbytes_per_sec": 0, 
00:24:51.537 "r_mbytes_per_sec": 0, 00:24:51.537 "w_mbytes_per_sec": 0 00:24:51.537 }, 00:24:51.537 "claimed": false, 00:24:51.537 "zoned": false, 00:24:51.537 "supported_io_types": { 00:24:51.537 "read": true, 00:24:51.537 "write": true, 00:24:51.537 "unmap": false, 00:24:51.537 "flush": true, 00:24:51.537 "reset": true, 00:24:51.537 "nvme_admin": true, 00:24:51.537 "nvme_io": true, 00:24:51.537 "nvme_io_md": false, 00:24:51.537 "write_zeroes": true, 00:24:51.537 "zcopy": false, 00:24:51.537 "get_zone_info": false, 00:24:51.537 "zone_management": false, 00:24:51.537 "zone_append": false, 00:24:51.537 "compare": true, 00:24:51.537 "compare_and_write": true, 00:24:51.537 "abort": true, 00:24:51.537 "seek_hole": false, 00:24:51.537 "seek_data": false, 00:24:51.537 "copy": true, 00:24:51.537 "nvme_iov_md": false 00:24:51.537 }, 00:24:51.537 "memory_domains": [ 00:24:51.537 { 00:24:51.537 "dma_device_id": "system", 00:24:51.537 "dma_device_type": 1 00:24:51.537 } 00:24:51.537 ], 00:24:51.537 "driver_specific": { 00:24:51.537 "nvme": [ 00:24:51.537 { 00:24:51.537 "trid": { 00:24:51.537 "trtype": "TCP", 00:24:51.537 "adrfam": "IPv4", 00:24:51.537 "traddr": "10.0.0.2", 00:24:51.537 "trsvcid": "4421", 00:24:51.537 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:51.537 }, 00:24:51.537 "ctrlr_data": { 00:24:51.537 "cntlid": 3, 00:24:51.537 "vendor_id": "0x8086", 00:24:51.537 "model_number": "SPDK bdev Controller", 00:24:51.537 "serial_number": "00000000000000000000", 00:24:51.537 "firmware_revision": "25.01", 00:24:51.537 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:51.537 "oacs": { 00:24:51.537 "security": 0, 00:24:51.537 "format": 0, 00:24:51.537 "firmware": 0, 00:24:51.537 "ns_manage": 0 00:24:51.537 }, 00:24:51.537 "multi_ctrlr": true, 00:24:51.537 "ana_reporting": false 00:24:51.537 }, 00:24:51.537 "vs": { 00:24:51.537 "nvme_version": "1.3" 00:24:51.537 }, 00:24:51.537 "ns_data": { 00:24:51.537 "id": 1, 00:24:51.537 "can_share": true 00:24:51.537 } 00:24:51.537 } 
00:24:51.537 ], 00:24:51.537 "mp_policy": "active_passive" 00:24:51.537 } 00:24:51.537 } 00:24:51.537 ] 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.537 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.4dpMiJhCN5 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:51.796 rmmod nvme_tcp 00:24:51.796 rmmod nvme_fabrics 00:24:51.796 rmmod nvme_keyring 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:51.796 13:35:33 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@515 -- # '[' -n 1860558 ']' 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # killprocess 1860558 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 1860558 ']' 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 1860558 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1860558 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1860558' 00:24:51.796 killing process with pid 1860558 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 1860558 00:24:51.796 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 1860558 00:24:52.056 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:52.056 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:52.056 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:52.056 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:52.056 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-save 00:24:52.056 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:52.056 
13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@789 -- # iptables-restore 00:24:52.056 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:52.056 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:52.056 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.056 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.056 13:35:33 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.961 13:35:35 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.961 00:24:53.961 real 0m5.660s 00:24:53.961 user 0m2.205s 00:24:53.961 sys 0m1.874s 00:24:53.961 13:35:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:53.961 13:35:35 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:53.961 ************************************ 00:24:53.961 END TEST nvmf_async_init 00:24:53.961 ************************************ 00:24:53.961 13:35:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:53.961 13:35:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:53.961 13:35:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:53.961 13:35:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.220 ************************************ 00:24:54.220 START TEST dma 00:24:54.220 ************************************ 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:24:54.220 * Looking for test storage... 00:24:54.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lcov --version 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:54.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.220 --rc genhtml_branch_coverage=1 00:24:54.220 --rc genhtml_function_coverage=1 00:24:54.220 --rc genhtml_legend=1 00:24:54.220 --rc geninfo_all_blocks=1 00:24:54.220 --rc geninfo_unexecuted_blocks=1 00:24:54.220 00:24:54.220 ' 00:24:54.220 13:35:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:54.220 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.220 --rc genhtml_branch_coverage=1 00:24:54.220 --rc genhtml_function_coverage=1 
00:24:54.220 --rc genhtml_legend=1 00:24:54.221 --rc geninfo_all_blocks=1 00:24:54.221 --rc geninfo_unexecuted_blocks=1 00:24:54.221 00:24:54.221 ' 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:54.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.221 --rc genhtml_branch_coverage=1 00:24:54.221 --rc genhtml_function_coverage=1 00:24:54.221 --rc genhtml_legend=1 00:24:54.221 --rc geninfo_all_blocks=1 00:24:54.221 --rc geninfo_unexecuted_blocks=1 00:24:54.221 00:24:54.221 ' 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:54.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.221 --rc genhtml_branch_coverage=1 00:24:54.221 --rc genhtml_function_coverage=1 00:24:54.221 --rc genhtml_legend=1 00:24:54.221 --rc geninfo_all_blocks=1 00:24:54.221 --rc geninfo_unexecuted_blocks=1 00:24:54.221 00:24:54.221 ' 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:54.221 
13:35:35 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:54.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:54.221 00:24:54.221 real 0m0.171s 00:24:54.221 user 0m0.111s 00:24:54.221 sys 0m0.070s 00:24:54.221 13:35:35 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:54.221 ************************************ 00:24:54.221 END TEST dma 00:24:54.221 ************************************ 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.221 ************************************ 00:24:54.221 START TEST nvmf_identify 00:24:54.221 ************************************ 00:24:54.221 13:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:54.479 * Looking for test storage... 
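Both host tests above run the same lcov version gate (`lt 1.15 2` via `cmp_versions` in scripts/common.sh) before enabling the branch/function coverage flags. A hedged sketch of that component-wise comparison, reconstructed from the trace rather than the exact upstream code:

```shell
lt() {
    # Return 0 iff dotted version $1 is strictly less than $2.
    local -a ver1 ver2
    IFS='.-' read -ra ver1 <<< "$1"   # split on dots and dashes, as the trace shows
    IFS='.-' read -ra ver2 <<< "$2"
    local v max=${#ver1[@]}
    if (( ${#ver2[@]} > max )); then max=${#ver2[@]}; fi
    for (( v = 0; v < max; v++ )); do
        # Missing components default to 0, so 1.15 compares against 2.0.
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}

# lcov 1.15 predates 2.x, so the old-style --rc options get selected.
if lt 1.15 2; then
    lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
```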
00:24:54.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:54.479 13:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:54.479 13:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:24:54.479 13:35:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:54.479 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:54.479 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:54.479 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:54.479 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:54.479 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:54.479 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:54.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.480 --rc genhtml_branch_coverage=1 00:24:54.480 --rc genhtml_function_coverage=1 00:24:54.480 --rc genhtml_legend=1 00:24:54.480 --rc geninfo_all_blocks=1 00:24:54.480 --rc geninfo_unexecuted_blocks=1 00:24:54.480 00:24:54.480 ' 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- 
# LCOV_OPTS=' 00:24:54.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.480 --rc genhtml_branch_coverage=1 00:24:54.480 --rc genhtml_function_coverage=1 00:24:54.480 --rc genhtml_legend=1 00:24:54.480 --rc geninfo_all_blocks=1 00:24:54.480 --rc geninfo_unexecuted_blocks=1 00:24:54.480 00:24:54.480 ' 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:54.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.480 --rc genhtml_branch_coverage=1 00:24:54.480 --rc genhtml_function_coverage=1 00:24:54.480 --rc genhtml_legend=1 00:24:54.480 --rc geninfo_all_blocks=1 00:24:54.480 --rc geninfo_unexecuted_blocks=1 00:24:54.480 00:24:54.480 ' 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:54.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:54.480 --rc genhtml_branch_coverage=1 00:24:54.480 --rc genhtml_function_coverage=1 00:24:54.480 --rc genhtml_legend=1 00:24:54.480 --rc geninfo_all_blocks=1 00:24:54.480 --rc geninfo_unexecuted_blocks=1 00:24:54.480 00:24:54.480 ' 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:54.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # prepare_net_devs 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # local -g is_hw=no 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # remove_spdk_ns 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:54.480 13:35:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:57.011 13:35:38 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:24:57.011 Found 0000:09:00.0 (0x8086 - 0x1592) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.011 
13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:24:57.011 Found 0000:09:00.1 (0x8086 - 0x1592) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:57.011 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:57.012 Found net devices under 0000:09:00.0: cvl_0_0 00:24:57.012 13:35:38 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ up == up ]] 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:57.012 Found net devices under 0000:09:00.1: cvl_0_1 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # is_hw=yes 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:57.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:24:57.012 00:24:57.012 --- 10.0.0.2 ping statistics --- 00:24:57.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.012 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:57.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:24:57.012 00:24:57.012 --- 10.0.0.1 ping statistics --- 00:24:57.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.012 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # return 0 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@491 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1862716 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1862716 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 1862716 ']' 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:57.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:57.012 [2024-10-07 13:35:38.316183] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:24:57.012 [2024-10-07 13:35:38.316264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.012 [2024-10-07 13:35:38.386176] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:57.012 [2024-10-07 13:35:38.501173] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.012 [2024-10-07 13:35:38.501248] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.012 [2024-10-07 13:35:38.501262] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.012 [2024-10-07 13:35:38.501273] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.012 [2024-10-07 13:35:38.501282] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:57.012 [2024-10-07 13:35:38.503067] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.012 [2024-10-07 13:35:38.503131] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.012 [2024-10-07 13:35:38.503153] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:24:57.012 [2024-10-07 13:35:38.503156] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:57.012 [2024-10-07 13:35:38.644897] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:57.012 Malloc0 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.012 13:35:38 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.012 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:57.012 [2024-10-07 13:35:38.722359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.274 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.274 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:57.274 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.274 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:57.274 13:35:38 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.274 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:57.274 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.274 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:57.274 [ 00:24:57.274 { 00:24:57.274 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:57.274 "subtype": "Discovery", 00:24:57.274 "listen_addresses": [ 00:24:57.274 { 00:24:57.274 "trtype": "TCP", 00:24:57.274 "adrfam": "IPv4", 00:24:57.274 "traddr": "10.0.0.2", 00:24:57.274 "trsvcid": "4420" 00:24:57.274 } 00:24:57.274 ], 00:24:57.274 "allow_any_host": true, 00:24:57.274 "hosts": [] 00:24:57.274 }, 00:24:57.274 { 00:24:57.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.274 "subtype": "NVMe", 00:24:57.274 "listen_addresses": [ 00:24:57.274 { 00:24:57.274 "trtype": "TCP", 00:24:57.274 "adrfam": "IPv4", 00:24:57.274 "traddr": "10.0.0.2", 00:24:57.274 "trsvcid": "4420" 00:24:57.274 } 00:24:57.274 ], 00:24:57.274 "allow_any_host": true, 00:24:57.274 "hosts": [], 00:24:57.274 "serial_number": "SPDK00000000000001", 00:24:57.274 "model_number": "SPDK bdev Controller", 00:24:57.274 "max_namespaces": 32, 00:24:57.274 "min_cntlid": 1, 00:24:57.274 "max_cntlid": 65519, 00:24:57.274 "namespaces": [ 00:24:57.274 { 00:24:57.274 "nsid": 1, 00:24:57.274 "bdev_name": "Malloc0", 00:24:57.274 "name": "Malloc0", 00:24:57.274 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:57.274 "eui64": "ABCDEF0123456789", 00:24:57.274 "uuid": "b60bec75-10fa-4f9b-af3d-425d29396079" 00:24:57.274 } 00:24:57.274 ] 00:24:57.274 } 00:24:57.274 ] 00:24:57.274 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.274 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:57.274 [2024-10-07 13:35:38.765337] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:24:57.274 [2024-10-07 13:35:38.765387] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1862746 ] 00:24:57.274 [2024-10-07 13:35:38.797971] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:57.274 [2024-10-07 13:35:38.798047] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:57.274 [2024-10-07 13:35:38.798058] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:57.274 [2024-10-07 13:35:38.798075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:57.274 [2024-10-07 13:35:38.798089] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:57.274 [2024-10-07 13:35:38.802097] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:57.274 [2024-10-07 13:35:38.802167] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x958760 0 00:24:57.274 [2024-10-07 13:35:38.809694] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:57.274 [2024-10-07 13:35:38.809716] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:57.274 [2024-10-07 13:35:38.809725] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:57.274 [2024-10-07 13:35:38.809731] 
nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:57.274 [2024-10-07 13:35:38.809782] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.809796] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.809803] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x958760) 00:24:57.274 [2024-10-07 13:35:38.809821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:57.274 [2024-10-07 13:35:38.809848] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8480, cid 0, qid 0 00:24:57.274 [2024-10-07 13:35:38.816679] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.274 [2024-10-07 13:35:38.816698] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.274 [2024-10-07 13:35:38.816705] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.816713] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8480) on tqpair=0x958760 00:24:57.274 [2024-10-07 13:35:38.816729] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:57.274 [2024-10-07 13:35:38.816755] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:57.274 [2024-10-07 13:35:38.816765] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:57.274 [2024-10-07 13:35:38.816788] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.816798] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.816805] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x958760) 00:24:57.274 
[2024-10-07 13:35:38.816816] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.274 [2024-10-07 13:35:38.816842] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8480, cid 0, qid 0 00:24:57.274 [2024-10-07 13:35:38.816969] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.274 [2024-10-07 13:35:38.816983] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.274 [2024-10-07 13:35:38.816991] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.816997] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8480) on tqpair=0x958760 00:24:57.274 [2024-10-07 13:35:38.817007] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:57.274 [2024-10-07 13:35:38.817027] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:57.274 [2024-10-07 13:35:38.817040] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.817048] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.817055] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x958760) 00:24:57.274 [2024-10-07 13:35:38.817065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.274 [2024-10-07 13:35:38.817087] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8480, cid 0, qid 0 00:24:57.274 [2024-10-07 13:35:38.817166] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.274 [2024-10-07 13:35:38.817178] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.274 [2024-10-07 
13:35:38.817185] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.817192] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8480) on tqpair=0x958760 00:24:57.274 [2024-10-07 13:35:38.817201] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:57.274 [2024-10-07 13:35:38.817214] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:57.274 [2024-10-07 13:35:38.817226] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.817233] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.817240] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x958760) 00:24:57.274 [2024-10-07 13:35:38.817250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.274 [2024-10-07 13:35:38.817271] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8480, cid 0, qid 0 00:24:57.274 [2024-10-07 13:35:38.817350] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.274 [2024-10-07 13:35:38.817364] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.274 [2024-10-07 13:35:38.817371] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.817377] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8480) on tqpair=0x958760 00:24:57.274 [2024-10-07 13:35:38.817387] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:57.274 [2024-10-07 13:35:38.817402] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:24:57.274 [2024-10-07 13:35:38.817411] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.817418] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x958760) 00:24:57.274 [2024-10-07 13:35:38.817428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.274 [2024-10-07 13:35:38.817449] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8480, cid 0, qid 0 00:24:57.274 [2024-10-07 13:35:38.817521] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.274 [2024-10-07 13:35:38.817533] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.274 [2024-10-07 13:35:38.817540] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.817546] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8480) on tqpair=0x958760 00:24:57.274 [2024-10-07 13:35:38.817554] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:57.274 [2024-10-07 13:35:38.817563] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:57.274 [2024-10-07 13:35:38.817580] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:57.274 [2024-10-07 13:35:38.817691] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:57.274 [2024-10-07 13:35:38.817701] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:57.274 [2024-10-07 13:35:38.817715] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.817723] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.817730] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x958760) 00:24:57.274 [2024-10-07 13:35:38.817740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.274 [2024-10-07 13:35:38.817762] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8480, cid 0, qid 0 00:24:57.274 [2024-10-07 13:35:38.817863] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.274 [2024-10-07 13:35:38.817876] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.274 [2024-10-07 13:35:38.817882] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.817889] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8480) on tqpair=0x958760 00:24:57.274 [2024-10-07 13:35:38.817897] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:57.274 [2024-10-07 13:35:38.817913] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.817922] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.817928] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x958760) 00:24:57.274 [2024-10-07 13:35:38.817939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.274 [2024-10-07 13:35:38.817959] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8480, cid 0, qid 0 00:24:57.274 [2024-10-07 13:35:38.818035] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:24:57.274 [2024-10-07 13:35:38.818049] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.274 [2024-10-07 13:35:38.818056] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.818062] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8480) on tqpair=0x958760 00:24:57.274 [2024-10-07 13:35:38.818070] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:57.274 [2024-10-07 13:35:38.818078] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:57.274 [2024-10-07 13:35:38.818091] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:57.274 [2024-10-07 13:35:38.818105] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:57.274 [2024-10-07 13:35:38.818121] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.818129] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x958760) 00:24:57.274 [2024-10-07 13:35:38.818140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.274 [2024-10-07 13:35:38.818161] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8480, cid 0, qid 0 00:24:57.274 [2024-10-07 13:35:38.818292] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.274 [2024-10-07 13:35:38.818304] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.274 [2024-10-07 13:35:38.818315] 
nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.818322] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x958760): datao=0, datal=4096, cccid=0 00:24:57.274 [2024-10-07 13:35:38.818331] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9b8480) on tqpair(0x958760): expected_datao=0, payload_size=4096 00:24:57.274 [2024-10-07 13:35:38.818338] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.818349] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.818357] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.818369] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.274 [2024-10-07 13:35:38.818379] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.274 [2024-10-07 13:35:38.818385] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.818392] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8480) on tqpair=0x958760 00:24:57.274 [2024-10-07 13:35:38.818404] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:57.274 [2024-10-07 13:35:38.818413] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:57.274 [2024-10-07 13:35:38.818420] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:57.274 [2024-10-07 13:35:38.818428] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:57.274 [2024-10-07 13:35:38.818435] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:57.274 [2024-10-07 
13:35:38.818443] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:57.274 [2024-10-07 13:35:38.818457] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:57.274 [2024-10-07 13:35:38.818469] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.818477] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.818483] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x958760) 00:24:57.274 [2024-10-07 13:35:38.818494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:57.274 [2024-10-07 13:35:38.818515] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8480, cid 0, qid 0 00:24:57.274 [2024-10-07 13:35:38.818608] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.274 [2024-10-07 13:35:38.818622] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.274 [2024-10-07 13:35:38.818628] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.818635] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8480) on tqpair=0x958760 00:24:57.274 [2024-10-07 13:35:38.818647] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.818654] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.274 [2024-10-07 13:35:38.818661] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x958760) 00:24:57.275 [2024-10-07 13:35:38.818678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:57.275 [2024-10-07 13:35:38.818690] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.818697] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.818704] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x958760) 00:24:57.275 [2024-10-07 13:35:38.818712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.275 [2024-10-07 13:35:38.818727] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.818735] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.818741] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x958760) 00:24:57.275 [2024-10-07 13:35:38.818749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.275 [2024-10-07 13:35:38.818759] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.818766] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.818772] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.275 [2024-10-07 13:35:38.818781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.275 [2024-10-07 13:35:38.818790] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:57.275 [2024-10-07 13:35:38.818809] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:57.275 
[2024-10-07 13:35:38.818822] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.818829] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x958760) 00:24:57.275 [2024-10-07 13:35:38.818840] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.275 [2024-10-07 13:35:38.818862] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8480, cid 0, qid 0 00:24:57.275 [2024-10-07 13:35:38.818873] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8600, cid 1, qid 0 00:24:57.275 [2024-10-07 13:35:38.818881] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8780, cid 2, qid 0 00:24:57.275 [2024-10-07 13:35:38.818888] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.275 [2024-10-07 13:35:38.818896] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8a80, cid 4, qid 0 00:24:57.275 [2024-10-07 13:35:38.819037] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.275 [2024-10-07 13:35:38.819051] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.275 [2024-10-07 13:35:38.819058] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.819064] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8a80) on tqpair=0x958760 00:24:57.275 [2024-10-07 13:35:38.819073] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:57.275 [2024-10-07 13:35:38.819082] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:57.275 [2024-10-07 13:35:38.819100] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:57.275 [2024-10-07 13:35:38.819109] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x958760) 00:24:57.275 [2024-10-07 13:35:38.819120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.275 [2024-10-07 13:35:38.819141] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8a80, cid 4, qid 0 00:24:57.275 [2024-10-07 13:35:38.819228] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.275 [2024-10-07 13:35:38.819239] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.275 [2024-10-07 13:35:38.819246] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.819252] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x958760): datao=0, datal=4096, cccid=4 00:24:57.275 [2024-10-07 13:35:38.819260] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9b8a80) on tqpair(0x958760): expected_datao=0, payload_size=4096 00:24:57.275 [2024-10-07 13:35:38.819271] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.819288] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.819297] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.859764] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.275 [2024-10-07 13:35:38.859783] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.275 [2024-10-07 13:35:38.859790] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.859797] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8a80) on tqpair=0x958760 00:24:57.275 [2024-10-07 13:35:38.859817] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:57.275 [2024-10-07 13:35:38.859861] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.859874] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x958760) 00:24:57.275 [2024-10-07 13:35:38.859885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.275 [2024-10-07 13:35:38.859897] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.859904] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.859911] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x958760) 00:24:57.275 [2024-10-07 13:35:38.859920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.275 [2024-10-07 13:35:38.859943] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8a80, cid 4, qid 0 00:24:57.275 [2024-10-07 13:35:38.859955] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8c00, cid 5, qid 0 00:24:57.275 [2024-10-07 13:35:38.860078] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.275 [2024-10-07 13:35:38.860091] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.275 [2024-10-07 13:35:38.860097] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.860104] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x958760): datao=0, datal=1024, cccid=4 00:24:57.275 [2024-10-07 13:35:38.860112] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9b8a80) on tqpair(0x958760): expected_datao=0, payload_size=1024 00:24:57.275 [2024-10-07 13:35:38.860119] 
nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.860129] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.860136] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.860144] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.275 [2024-10-07 13:35:38.860153] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.275 [2024-10-07 13:35:38.860160] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.860166] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8c00) on tqpair=0x958760 00:24:57.275 [2024-10-07 13:35:38.904679] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.275 [2024-10-07 13:35:38.904698] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.275 [2024-10-07 13:35:38.904705] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.904712] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8a80) on tqpair=0x958760 00:24:57.275 [2024-10-07 13:35:38.904736] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.904747] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x958760) 00:24:57.275 [2024-10-07 13:35:38.904759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.275 [2024-10-07 13:35:38.904808] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8a80, cid 4, qid 0 00:24:57.275 [2024-10-07 13:35:38.904947] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.275 [2024-10-07 13:35:38.904962] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.275 
[2024-10-07 13:35:38.904969] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.904975] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x958760): datao=0, datal=3072, cccid=4 00:24:57.275 [2024-10-07 13:35:38.904983] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9b8a80) on tqpair(0x958760): expected_datao=0, payload_size=3072 00:24:57.275 [2024-10-07 13:35:38.904991] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.905001] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.905008] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.905044] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.275 [2024-10-07 13:35:38.905056] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.275 [2024-10-07 13:35:38.905063] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.905070] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8a80) on tqpair=0x958760 00:24:57.275 [2024-10-07 13:35:38.905085] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.905094] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x958760) 00:24:57.275 [2024-10-07 13:35:38.905104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.275 [2024-10-07 13:35:38.905132] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8a80, cid 4, qid 0 00:24:57.275 [2024-10-07 13:35:38.905233] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.275 [2024-10-07 13:35:38.905245] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 
00:24:57.275 [2024-10-07 13:35:38.905252] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.905258] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x958760): datao=0, datal=8, cccid=4 00:24:57.275 [2024-10-07 13:35:38.905265] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x9b8a80) on tqpair(0x958760): expected_datao=0, payload_size=8 00:24:57.275 [2024-10-07 13:35:38.905273] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.905282] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.905289] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.945770] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.275 [2024-10-07 13:35:38.945789] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.275 [2024-10-07 13:35:38.945796] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.275 [2024-10-07 13:35:38.945804] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8a80) on tqpair=0x958760 00:24:57.275 ===================================================== 00:24:57.275 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:57.275 ===================================================== 00:24:57.275 Controller Capabilities/Features 00:24:57.275 ================================ 00:24:57.275 Vendor ID: 0000 00:24:57.275 Subsystem Vendor ID: 0000 00:24:57.275 Serial Number: .................... 00:24:57.275 Model Number: ........................................ 
00:24:57.275 Firmware Version: 25.01 00:24:57.275 Recommended Arb Burst: 0 00:24:57.275 IEEE OUI Identifier: 00 00 00 00:24:57.275 Multi-path I/O 00:24:57.275 May have multiple subsystem ports: No 00:24:57.275 May have multiple controllers: No 00:24:57.275 Associated with SR-IOV VF: No 00:24:57.275 Max Data Transfer Size: 131072 00:24:57.275 Max Number of Namespaces: 0 00:24:57.275 Max Number of I/O Queues: 1024 00:24:57.275 NVMe Specification Version (VS): 1.3 00:24:57.275 NVMe Specification Version (Identify): 1.3 00:24:57.275 Maximum Queue Entries: 128 00:24:57.275 Contiguous Queues Required: Yes 00:24:57.275 Arbitration Mechanisms Supported 00:24:57.275 Weighted Round Robin: Not Supported 00:24:57.275 Vendor Specific: Not Supported 00:24:57.275 Reset Timeout: 15000 ms 00:24:57.275 Doorbell Stride: 4 bytes 00:24:57.275 NVM Subsystem Reset: Not Supported 00:24:57.275 Command Sets Supported 00:24:57.275 NVM Command Set: Supported 00:24:57.275 Boot Partition: Not Supported 00:24:57.275 Memory Page Size Minimum: 4096 bytes 00:24:57.275 Memory Page Size Maximum: 4096 bytes 00:24:57.275 Persistent Memory Region: Not Supported 00:24:57.275 Optional Asynchronous Events Supported 00:24:57.275 Namespace Attribute Notices: Not Supported 00:24:57.275 Firmware Activation Notices: Not Supported 00:24:57.275 ANA Change Notices: Not Supported 00:24:57.275 PLE Aggregate Log Change Notices: Not Supported 00:24:57.275 LBA Status Info Alert Notices: Not Supported 00:24:57.275 EGE Aggregate Log Change Notices: Not Supported 00:24:57.275 Normal NVM Subsystem Shutdown event: Not Supported 00:24:57.275 Zone Descriptor Change Notices: Not Supported 00:24:57.275 Discovery Log Change Notices: Supported 00:24:57.276 Controller Attributes 00:24:57.276 128-bit Host Identifier: Not Supported 00:24:57.276 Non-Operational Permissive Mode: Not Supported 00:24:57.276 NVM Sets: Not Supported 00:24:57.276 Read Recovery Levels: Not Supported 00:24:57.276 Endurance Groups: Not Supported 00:24:57.276 
Predictable Latency Mode: Not Supported 00:24:57.276 Traffic Based Keep Alive: Not Supported 00:24:57.276 Namespace Granularity: Not Supported 00:24:57.276 SQ Associations: Not Supported 00:24:57.276 UUID List: Not Supported 00:24:57.276 Multi-Domain Subsystem: Not Supported 00:24:57.276 Fixed Capacity Management: Not Supported 00:24:57.276 Variable Capacity Management: Not Supported 00:24:57.276 Delete Endurance Group: Not Supported 00:24:57.276 Delete NVM Set: Not Supported 00:24:57.276 Extended LBA Formats Supported: Not Supported 00:24:57.276 Flexible Data Placement Supported: Not Supported 00:24:57.276 00:24:57.276 Controller Memory Buffer Support 00:24:57.276 ================================ 00:24:57.276 Supported: No 00:24:57.276 00:24:57.276 Persistent Memory Region Support 00:24:57.276 ================================ 00:24:57.276 Supported: No 00:24:57.276 00:24:57.276 Admin Command Set Attributes 00:24:57.276 ============================ 00:24:57.276 Security Send/Receive: Not Supported 00:24:57.276 Format NVM: Not Supported 00:24:57.276 Firmware Activate/Download: Not Supported 00:24:57.276 Namespace Management: Not Supported 00:24:57.276 Device Self-Test: Not Supported 00:24:57.276 Directives: Not Supported 00:24:57.276 NVMe-MI: Not Supported 00:24:57.276 Virtualization Management: Not Supported 00:24:57.276 Doorbell Buffer Config: Not Supported 00:24:57.276 Get LBA Status Capability: Not Supported 00:24:57.276 Command & Feature Lockdown Capability: Not Supported 00:24:57.276 Abort Command Limit: 1 00:24:57.276 Async Event Request Limit: 4 00:24:57.276 Number of Firmware Slots: N/A 00:24:57.276 Firmware Slot 1 Read-Only: N/A 00:24:57.276 Firmware Activation Without Reset: N/A 00:24:57.276 Multiple Update Detection Support: N/A 00:24:57.276 Firmware Update Granularity: No Information Provided 00:24:57.276 Per-Namespace SMART Log: No 00:24:57.276 Asymmetric Namespace Access Log Page: Not Supported 00:24:57.276 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:24:57.276 Command Effects Log Page: Not Supported 00:24:57.276 Get Log Page Extended Data: Supported 00:24:57.276 Telemetry Log Pages: Not Supported 00:24:57.276 Persistent Event Log Pages: Not Supported 00:24:57.276 Supported Log Pages Log Page: May Support 00:24:57.276 Commands Supported & Effects Log Page: Not Supported 00:24:57.276 Feature Identifiers & Effects Log Page: May Support 00:24:57.276 NVMe-MI Commands & Effects Log Page: May Support 00:24:57.276 Data Area 4 for Telemetry Log: Not Supported 00:24:57.276 Error Log Page Entries Supported: 128 00:24:57.276 Keep Alive: Not Supported 00:24:57.276 00:24:57.276 NVM Command Set Attributes 00:24:57.276 ========================== 00:24:57.276 Submission Queue Entry Size 00:24:57.276 Max: 1 00:24:57.276 Min: 1 00:24:57.276 Completion Queue Entry Size 00:24:57.276 Max: 1 00:24:57.276 Min: 1 00:24:57.276 Number of Namespaces: 0 00:24:57.276 Compare Command: Not Supported 00:24:57.276 Write Uncorrectable Command: Not Supported 00:24:57.276 Dataset Management Command: Not Supported 00:24:57.276 Write Zeroes Command: Not Supported 00:24:57.276 Set Features Save Field: Not Supported 00:24:57.276 Reservations: Not Supported 00:24:57.276 Timestamp: Not Supported 00:24:57.276 Copy: Not Supported 00:24:57.276 Volatile Write Cache: Not Present 00:24:57.276 Atomic Write Unit (Normal): 1 00:24:57.276 Atomic Write Unit (PFail): 1 00:24:57.276 Atomic Compare & Write Unit: 1 00:24:57.276 Fused Compare & Write: Supported 00:24:57.276 Scatter-Gather List 00:24:57.276 SGL Command Set: Supported 00:24:57.276 SGL Keyed: Supported 00:24:57.276 SGL Bit Bucket Descriptor: Not Supported 00:24:57.276 SGL Metadata Pointer: Not Supported 00:24:57.276 Oversized SGL: Not Supported 00:24:57.276 SGL Metadata Address: Not Supported 00:24:57.276 SGL Offset: Supported 00:24:57.276 Transport SGL Data Block: Not Supported 00:24:57.276 Replay Protected Memory Block: Not Supported 00:24:57.276 00:24:57.276 
Firmware Slot Information 00:24:57.276 ========================= 00:24:57.276 Active slot: 0 00:24:57.276 00:24:57.276 00:24:57.276 Error Log 00:24:57.276 ========= 00:24:57.276 00:24:57.276 Active Namespaces 00:24:57.276 ================= 00:24:57.276 Discovery Log Page 00:24:57.276 ================== 00:24:57.276 Generation Counter: 2 00:24:57.276 Number of Records: 2 00:24:57.276 Record Format: 0 00:24:57.276 00:24:57.276 Discovery Log Entry 0 00:24:57.276 ---------------------- 00:24:57.276 Transport Type: 3 (TCP) 00:24:57.276 Address Family: 1 (IPv4) 00:24:57.276 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:57.276 Entry Flags: 00:24:57.276 Duplicate Returned Information: 1 00:24:57.276 Explicit Persistent Connection Support for Discovery: 1 00:24:57.276 Transport Requirements: 00:24:57.276 Secure Channel: Not Required 00:24:57.276 Port ID: 0 (0x0000) 00:24:57.276 Controller ID: 65535 (0xffff) 00:24:57.276 Admin Max SQ Size: 128 00:24:57.276 Transport Service Identifier: 4420 00:24:57.276 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:57.276 Transport Address: 10.0.0.2 00:24:57.276 Discovery Log Entry 1 00:24:57.276 ---------------------- 00:24:57.276 Transport Type: 3 (TCP) 00:24:57.276 Address Family: 1 (IPv4) 00:24:57.276 Subsystem Type: 2 (NVM Subsystem) 00:24:57.276 Entry Flags: 00:24:57.276 Duplicate Returned Information: 0 00:24:57.276 Explicit Persistent Connection Support for Discovery: 0 00:24:57.276 Transport Requirements: 00:24:57.276 Secure Channel: Not Required 00:24:57.276 Port ID: 0 (0x0000) 00:24:57.276 Controller ID: 65535 (0xffff) 00:24:57.276 Admin Max SQ Size: 128 00:24:57.276 Transport Service Identifier: 4420 00:24:57.276 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:57.276 Transport Address: 10.0.0.2 [2024-10-07 13:35:38.945910] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:57.276 [2024-10-07 13:35:38.945932] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8480) on tqpair=0x958760 00:24:57.276 [2024-10-07 13:35:38.945943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.276 [2024-10-07 13:35:38.945953] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8600) on tqpair=0x958760 00:24:57.276 [2024-10-07 13:35:38.945960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.276 [2024-10-07 13:35:38.945968] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8780) on tqpair=0x958760 00:24:57.276 [2024-10-07 13:35:38.945976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.276 [2024-10-07 13:35:38.945988] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.276 [2024-10-07 13:35:38.945996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.276 [2024-10-07 13:35:38.946009] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.276 [2024-10-07 13:35:38.946017] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.276 [2024-10-07 13:35:38.946024] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.276 [2024-10-07 13:35:38.946035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.276 [2024-10-07 13:35:38.946060] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.276 [2024-10-07 13:35:38.946163] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.276 [2024-10-07 13:35:38.946176] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.276 [2024-10-07 13:35:38.946183] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.276 [2024-10-07 13:35:38.946190] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.276 [2024-10-07 13:35:38.946202] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.276 [2024-10-07 13:35:38.946210] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.276 [2024-10-07 13:35:38.946216] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.276 [2024-10-07 13:35:38.946226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.276 [2024-10-07 13:35:38.946253] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.276 [2024-10-07 13:35:38.946349] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.276 [2024-10-07 13:35:38.946363] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.276 [2024-10-07 13:35:38.946370] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.276 [2024-10-07 13:35:38.946377] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.276 [2024-10-07 13:35:38.946385] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:57.276 [2024-10-07 13:35:38.946398] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:57.276 [2024-10-07 13:35:38.946416] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.276 [2024-10-07 13:35:38.946425] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.276 [2024-10-07 13:35:38.946432] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.276 [2024-10-07 13:35:38.946442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.277 [2024-10-07 13:35:38.946463] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.277 [2024-10-07 13:35:38.946537] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.277 [2024-10-07 13:35:38.946549] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.277 [2024-10-07 13:35:38.946556] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.946563] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.277 [2024-10-07 13:35:38.946579] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.946588] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.946595] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.277 [2024-10-07 13:35:38.946605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.277 [2024-10-07 13:35:38.946630] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.277 [2024-10-07 13:35:38.946726] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.277 [2024-10-07 13:35:38.946741] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.277 [2024-10-07 13:35:38.946748] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.946755] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.277 [2024-10-07 
13:35:38.946771] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.946781] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.946787] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.277 [2024-10-07 13:35:38.946798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.277 [2024-10-07 13:35:38.946819] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.277 [2024-10-07 13:35:38.946891] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.277 [2024-10-07 13:35:38.946905] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.277 [2024-10-07 13:35:38.946912] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.946919] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.277 [2024-10-07 13:35:38.946935] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.946944] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.946950] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.277 [2024-10-07 13:35:38.946961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.277 [2024-10-07 13:35:38.946982] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.277 [2024-10-07 13:35:38.947046] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.277 [2024-10-07 13:35:38.947058] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.277 [2024-10-07 13:35:38.947065] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947072] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.277 [2024-10-07 13:35:38.947088] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947097] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947103] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.277 [2024-10-07 13:35:38.947113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.277 [2024-10-07 13:35:38.947134] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.277 [2024-10-07 13:35:38.947204] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.277 [2024-10-07 13:35:38.947215] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.277 [2024-10-07 13:35:38.947222] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947229] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.277 [2024-10-07 13:35:38.947244] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947253] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947260] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.277 [2024-10-07 13:35:38.947270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.277 [2024-10-07 13:35:38.947290] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.277 [2024-10-07 
13:35:38.947360] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.277 [2024-10-07 13:35:38.947373] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.277 [2024-10-07 13:35:38.947380] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947387] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.277 [2024-10-07 13:35:38.947402] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947412] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947418] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.277 [2024-10-07 13:35:38.947429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.277 [2024-10-07 13:35:38.947449] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.277 [2024-10-07 13:35:38.947519] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.277 [2024-10-07 13:35:38.947532] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.277 [2024-10-07 13:35:38.947539] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947545] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.277 [2024-10-07 13:35:38.947561] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947570] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947576] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.277 [2024-10-07 13:35:38.947587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.277 [2024-10-07 13:35:38.947607] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.277 [2024-10-07 13:35:38.947700] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.277 [2024-10-07 13:35:38.947716] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.277 [2024-10-07 13:35:38.947723] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947730] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.277 [2024-10-07 13:35:38.947745] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947754] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947761] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.277 [2024-10-07 13:35:38.947771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.277 [2024-10-07 13:35:38.947792] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.277 [2024-10-07 13:35:38.947863] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.277 [2024-10-07 13:35:38.947876] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.277 [2024-10-07 13:35:38.947882] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947889] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.277 [2024-10-07 13:35:38.947904] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.947914] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.277 
[2024-10-07 13:35:38.947920] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.277 [2024-10-07 13:35:38.947930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.277 [2024-10-07 13:35:38.947951] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.277 [2024-10-07 13:35:38.948017] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.277 [2024-10-07 13:35:38.948036] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.277 [2024-10-07 13:35:38.948045] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.948051] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.277 [2024-10-07 13:35:38.948067] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.948076] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.948083] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.277 [2024-10-07 13:35:38.948093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.277 [2024-10-07 13:35:38.948114] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.277 [2024-10-07 13:35:38.948188] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.277 [2024-10-07 13:35:38.948202] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.277 [2024-10-07 13:35:38.948209] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.948216] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 
00:24:57.277 [2024-10-07 13:35:38.948232] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.948241] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.277 [2024-10-07 13:35:38.948247] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.278 [2024-10-07 13:35:38.948258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.278 [2024-10-07 13:35:38.948278] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.278 [2024-10-07 13:35:38.948352] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.278 [2024-10-07 13:35:38.948364] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.278 [2024-10-07 13:35:38.948371] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.278 [2024-10-07 13:35:38.948377] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.278 [2024-10-07 13:35:38.948393] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.278 [2024-10-07 13:35:38.948402] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.278 [2024-10-07 13:35:38.948408] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.278 [2024-10-07 13:35:38.948419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.278 [2024-10-07 13:35:38.948439] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.278 [2024-10-07 13:35:38.948508] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.278 [2024-10-07 13:35:38.948522] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.278 
[2024-10-07 13:35:38.948528] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.278 [2024-10-07 13:35:38.948535] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.278 [2024-10-07 13:35:38.948551] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.278 [2024-10-07 13:35:38.948560] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.278 [2024-10-07 13:35:38.948567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.278 [2024-10-07 13:35:38.948577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.278 [2024-10-07 13:35:38.948598] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 00:24:57.278 [2024-10-07 13:35:38.952680] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.278 [2024-10-07 13:35:38.952698] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.278 [2024-10-07 13:35:38.952705] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.278 [2024-10-07 13:35:38.952716] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.278 [2024-10-07 13:35:38.952735] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.278 [2024-10-07 13:35:38.952745] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.278 [2024-10-07 13:35:38.952751] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x958760) 00:24:57.278 [2024-10-07 13:35:38.952762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.278 [2024-10-07 13:35:38.952784] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x9b8900, cid 3, qid 0 
00:24:57.278 [2024-10-07 13:35:38.952897] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.278 [2024-10-07 13:35:38.952911] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.278 [2024-10-07 13:35:38.952918] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.278 [2024-10-07 13:35:38.952925] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x9b8900) on tqpair=0x958760 00:24:57.278 [2024-10-07 13:35:38.952937] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:24:57.278 00:24:57.278 13:35:38 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:57.539 [2024-10-07 13:35:38.990051] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 
00:24:57.539 [2024-10-07 13:35:38.990095] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1862750 ] 00:24:57.539 [2024-10-07 13:35:39.025023] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:57.539 [2024-10-07 13:35:39.025077] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:57.539 [2024-10-07 13:35:39.025087] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:57.539 [2024-10-07 13:35:39.025103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:57.539 [2024-10-07 13:35:39.025116] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:57.539 [2024-10-07 13:35:39.025632] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:57.539 [2024-10-07 13:35:39.025683] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a26760 0 00:24:57.539 [2024-10-07 13:35:39.035682] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:57.539 [2024-10-07 13:35:39.035702] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:57.539 [2024-10-07 13:35:39.035711] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:57.539 [2024-10-07 13:35:39.035717] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:57.539 [2024-10-07 13:35:39.035754] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.539 [2024-10-07 13:35:39.035766] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.539 [2024-10-07 13:35:39.035773] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a26760) 00:24:57.539 [2024-10-07 13:35:39.035787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:57.539 [2024-10-07 13:35:39.035814] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86480, cid 0, qid 0 00:24:57.540 [2024-10-07 13:35:39.041711] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.540 [2024-10-07 13:35:39.041741] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.540 [2024-10-07 13:35:39.041749] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.041757] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86480) on tqpair=0x1a26760 00:24:57.540 [2024-10-07 13:35:39.041771] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:57.540 [2024-10-07 13:35:39.041781] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:57.540 [2024-10-07 13:35:39.041791] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:57.540 [2024-10-07 13:35:39.041810] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.041819] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.041826] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a26760) 00:24:57.540 [2024-10-07 13:35:39.041837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.540 [2024-10-07 13:35:39.041862] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86480, cid 0, qid 0 00:24:57.540 [2024-10-07 13:35:39.041988] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.540 [2024-10-07 13:35:39.042001] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.540 [2024-10-07 13:35:39.042008] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042014] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86480) on tqpair=0x1a26760 00:24:57.540 [2024-10-07 13:35:39.042023] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:57.540 [2024-10-07 13:35:39.042035] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:57.540 [2024-10-07 13:35:39.042047] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042054] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042061] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a26760) 00:24:57.540 [2024-10-07 13:35:39.042071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.540 [2024-10-07 13:35:39.042092] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86480, cid 0, qid 0 00:24:57.540 [2024-10-07 13:35:39.042170] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.540 [2024-10-07 13:35:39.042183] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.540 [2024-10-07 13:35:39.042189] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042196] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86480) on tqpair=0x1a26760 00:24:57.540 [2024-10-07 13:35:39.042204] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to check en (no timeout) 00:24:57.540 [2024-10-07 13:35:39.042217] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:57.540 [2024-10-07 13:35:39.042229] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042236] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042242] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a26760) 00:24:57.540 [2024-10-07 13:35:39.042252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.540 [2024-10-07 13:35:39.042273] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86480, cid 0, qid 0 00:24:57.540 [2024-10-07 13:35:39.042346] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.540 [2024-10-07 13:35:39.042362] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.540 [2024-10-07 13:35:39.042370] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042376] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86480) on tqpair=0x1a26760 00:24:57.540 [2024-10-07 13:35:39.042385] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:57.540 [2024-10-07 13:35:39.042401] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042409] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042416] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a26760) 00:24:57.540 [2024-10-07 13:35:39.042426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.540 [2024-10-07 13:35:39.042475] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86480, cid 0, qid 0 00:24:57.540 [2024-10-07 13:35:39.042527] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.540 [2024-10-07 13:35:39.042542] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.540 [2024-10-07 13:35:39.042548] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042555] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86480) on tqpair=0x1a26760 00:24:57.540 [2024-10-07 13:35:39.042562] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:57.540 [2024-10-07 13:35:39.042571] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:57.540 [2024-10-07 13:35:39.042584] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:57.540 [2024-10-07 13:35:39.042694] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:57.540 [2024-10-07 13:35:39.042703] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:57.540 [2024-10-07 13:35:39.042716] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042723] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042730] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a26760) 00:24:57.540 [2024-10-07 13:35:39.042740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.540 [2024-10-07 13:35:39.042762] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86480, cid 0, qid 0 00:24:57.540 [2024-10-07 13:35:39.042871] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.540 [2024-10-07 13:35:39.042883] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.540 [2024-10-07 13:35:39.042890] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042896] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86480) on tqpair=0x1a26760 00:24:57.540 [2024-10-07 13:35:39.042904] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:57.540 [2024-10-07 13:35:39.042920] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042929] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.042935] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a26760) 00:24:57.540 [2024-10-07 13:35:39.042946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.540 [2024-10-07 13:35:39.042966] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86480, cid 0, qid 0 00:24:57.540 [2024-10-07 13:35:39.043041] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.540 [2024-10-07 13:35:39.043054] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.540 [2024-10-07 13:35:39.043060] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.043067] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86480) on tqpair=0x1a26760 00:24:57.540 [2024-10-07 13:35:39.043074] 
nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:57.540 [2024-10-07 13:35:39.043082] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:57.540 [2024-10-07 13:35:39.043095] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:57.540 [2024-10-07 13:35:39.043112] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:57.540 [2024-10-07 13:35:39.043128] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.043136] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a26760) 00:24:57.540 [2024-10-07 13:35:39.043146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.540 [2024-10-07 13:35:39.043167] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86480, cid 0, qid 0 00:24:57.540 [2024-10-07 13:35:39.043285] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.540 [2024-10-07 13:35:39.043300] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.540 [2024-10-07 13:35:39.043307] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.043313] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a26760): datao=0, datal=4096, cccid=0 00:24:57.540 [2024-10-07 13:35:39.043321] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a86480) on tqpair(0x1a26760): expected_datao=0, payload_size=4096 00:24:57.540 [2024-10-07 13:35:39.043328] nvme_tcp.c: 
800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.043338] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.043345] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.083784] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.540 [2024-10-07 13:35:39.083804] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.540 [2024-10-07 13:35:39.083811] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.083818] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86480) on tqpair=0x1a26760 00:24:57.540 [2024-10-07 13:35:39.083830] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:57.540 [2024-10-07 13:35:39.083839] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:57.540 [2024-10-07 13:35:39.083847] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:57.540 [2024-10-07 13:35:39.083854] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:57.540 [2024-10-07 13:35:39.083861] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:57.540 [2024-10-07 13:35:39.083869] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:57.540 [2024-10-07 13:35:39.083883] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:57.540 [2024-10-07 13:35:39.083895] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.083902] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.540 [2024-10-07 13:35:39.083909] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a26760) 00:24:57.540 [2024-10-07 13:35:39.083925] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:57.540 [2024-10-07 13:35:39.083950] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86480, cid 0, qid 0 00:24:57.540 [2024-10-07 13:35:39.084027] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.541 [2024-10-07 13:35:39.084040] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.541 [2024-10-07 13:35:39.084046] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084053] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86480) on tqpair=0x1a26760 00:24:57.541 [2024-10-07 13:35:39.084064] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084071] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084077] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a26760) 00:24:57.541 [2024-10-07 13:35:39.084087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.541 [2024-10-07 13:35:39.084097] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084104] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084110] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a26760) 00:24:57.541 [2024-10-07 13:35:39.084118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:24:57.541 [2024-10-07 13:35:39.084128] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084135] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084141] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a26760) 00:24:57.541 [2024-10-07 13:35:39.084149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.541 [2024-10-07 13:35:39.084159] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084166] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084172] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.541 [2024-10-07 13:35:39.084180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.541 [2024-10-07 13:35:39.084189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:57.541 [2024-10-07 13:35:39.084208] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:57.541 [2024-10-07 13:35:39.084221] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084228] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a26760) 00:24:57.541 [2024-10-07 13:35:39.084238] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.541 [2024-10-07 13:35:39.084276] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1a86480, cid 0, qid 0 00:24:57.541 [2024-10-07 13:35:39.084287] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86600, cid 1, qid 0 00:24:57.541 [2024-10-07 13:35:39.084295] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86780, cid 2, qid 0 00:24:57.541 [2024-10-07 13:35:39.084302] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.541 [2024-10-07 13:35:39.084324] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86a80, cid 4, qid 0 00:24:57.541 [2024-10-07 13:35:39.084475] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.541 [2024-10-07 13:35:39.084491] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.541 [2024-10-07 13:35:39.084499] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084505] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86a80) on tqpair=0x1a26760 00:24:57.541 [2024-10-07 13:35:39.084514] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:57.541 [2024-10-07 13:35:39.084522] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:57.541 [2024-10-07 13:35:39.084535] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:57.541 [2024-10-07 13:35:39.084551] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:57.541 [2024-10-07 13:35:39.084562] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084569] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084575] 
nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a26760) 00:24:57.541 [2024-10-07 13:35:39.084586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:57.541 [2024-10-07 13:35:39.084607] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86a80, cid 4, qid 0 00:24:57.541 [2024-10-07 13:35:39.084739] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.541 [2024-10-07 13:35:39.084754] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.541 [2024-10-07 13:35:39.084761] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084768] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86a80) on tqpair=0x1a26760 00:24:57.541 [2024-10-07 13:35:39.084837] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:57.541 [2024-10-07 13:35:39.084858] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:57.541 [2024-10-07 13:35:39.084872] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.084880] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a26760) 00:24:57.541 [2024-10-07 13:35:39.084890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.541 [2024-10-07 13:35:39.084912] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86a80, cid 4, qid 0 00:24:57.541 [2024-10-07 13:35:39.085045] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.541 [2024-10-07 13:35:39.085060] 
nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.541 [2024-10-07 13:35:39.085067] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.085073] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a26760): datao=0, datal=4096, cccid=4 00:24:57.541 [2024-10-07 13:35:39.085080] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a86a80) on tqpair(0x1a26760): expected_datao=0, payload_size=4096 00:24:57.541 [2024-10-07 13:35:39.085087] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.085104] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.085113] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.129676] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.541 [2024-10-07 13:35:39.129694] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.541 [2024-10-07 13:35:39.129701] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.129708] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86a80) on tqpair=0x1a26760 00:24:57.541 [2024-10-07 13:35:39.129727] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:57.541 [2024-10-07 13:35:39.129748] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:57.541 [2024-10-07 13:35:39.129766] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:57.541 [2024-10-07 13:35:39.129779] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.129787] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1a26760) 00:24:57.541 [2024-10-07 13:35:39.129798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.541 [2024-10-07 13:35:39.129821] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86a80, cid 4, qid 0 00:24:57.541 [2024-10-07 13:35:39.129975] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.541 [2024-10-07 13:35:39.129990] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.541 [2024-10-07 13:35:39.129997] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.130003] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a26760): datao=0, datal=4096, cccid=4 00:24:57.541 [2024-10-07 13:35:39.130010] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a86a80) on tqpair(0x1a26760): expected_datao=0, payload_size=4096 00:24:57.541 [2024-10-07 13:35:39.130017] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.130035] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.130043] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.170775] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.541 [2024-10-07 13:35:39.170793] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.541 [2024-10-07 13:35:39.170800] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.170807] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86a80) on tqpair=0x1a26760 00:24:57.541 [2024-10-07 13:35:39.170829] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:57.541 [2024-10-07 
13:35:39.170848] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:57.541 [2024-10-07 13:35:39.170863] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.170871] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a26760) 00:24:57.541 [2024-10-07 13:35:39.170882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.541 [2024-10-07 13:35:39.170905] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86a80, cid 4, qid 0 00:24:57.541 [2024-10-07 13:35:39.170996] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.541 [2024-10-07 13:35:39.171011] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.541 [2024-10-07 13:35:39.171017] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.171024] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a26760): datao=0, datal=4096, cccid=4 00:24:57.541 [2024-10-07 13:35:39.171031] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a86a80) on tqpair(0x1a26760): expected_datao=0, payload_size=4096 00:24:57.541 [2024-10-07 13:35:39.171038] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.171055] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.171064] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.211747] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.541 [2024-10-07 13:35:39.211766] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.541 [2024-10-07 13:35:39.211773] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.541 [2024-10-07 13:35:39.211780] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86a80) on tqpair=0x1a26760 00:24:57.541 [2024-10-07 13:35:39.211794] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:57.541 [2024-10-07 13:35:39.211809] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:57.541 [2024-10-07 13:35:39.211825] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:57.542 [2024-10-07 13:35:39.211836] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:57.542 [2024-10-07 13:35:39.211844] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:57.542 [2024-10-07 13:35:39.211853] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:57.542 [2024-10-07 13:35:39.211861] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:57.542 [2024-10-07 13:35:39.211868] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:57.542 [2024-10-07 13:35:39.211876] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:57.542 [2024-10-07 13:35:39.211896] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.211905] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0x1a26760) 00:24:57.542 [2024-10-07 13:35:39.211915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.542 [2024-10-07 13:35:39.211927] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.211934] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.211940] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a26760) 00:24:57.542 [2024-10-07 13:35:39.211949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.542 [2024-10-07 13:35:39.211971] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86a80, cid 4, qid 0 00:24:57.542 [2024-10-07 13:35:39.211982] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86c00, cid 5, qid 0 00:24:57.542 [2024-10-07 13:35:39.212071] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.542 [2024-10-07 13:35:39.212083] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.542 [2024-10-07 13:35:39.212090] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.212096] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86a80) on tqpair=0x1a26760 00:24:57.542 [2024-10-07 13:35:39.212106] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.542 [2024-10-07 13:35:39.212115] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.542 [2024-10-07 13:35:39.212121] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.212128] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86c00) on tqpair=0x1a26760 00:24:57.542 [2024-10-07 13:35:39.212143] nvme_tcp.c: 
977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.212152] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a26760) 00:24:57.542 [2024-10-07 13:35:39.212162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.542 [2024-10-07 13:35:39.212189] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86c00, cid 5, qid 0 00:24:57.542 [2024-10-07 13:35:39.212266] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.542 [2024-10-07 13:35:39.212278] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.542 [2024-10-07 13:35:39.212285] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.212291] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86c00) on tqpair=0x1a26760 00:24:57.542 [2024-10-07 13:35:39.212306] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.212315] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a26760) 00:24:57.542 [2024-10-07 13:35:39.212325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.542 [2024-10-07 13:35:39.212345] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86c00, cid 5, qid 0 00:24:57.542 [2024-10-07 13:35:39.212417] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.542 [2024-10-07 13:35:39.212429] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.542 [2024-10-07 13:35:39.212436] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.212442] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86c00) on 
tqpair=0x1a26760 00:24:57.542 [2024-10-07 13:35:39.212458] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.212467] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a26760) 00:24:57.542 [2024-10-07 13:35:39.212477] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.542 [2024-10-07 13:35:39.212497] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86c00, cid 5, qid 0 00:24:57.542 [2024-10-07 13:35:39.212570] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.542 [2024-10-07 13:35:39.212583] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.542 [2024-10-07 13:35:39.212590] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.212597] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86c00) on tqpair=0x1a26760 00:24:57.542 [2024-10-07 13:35:39.212622] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.212633] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a26760) 00:24:57.542 [2024-10-07 13:35:39.212644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.542 [2024-10-07 13:35:39.212657] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.212664] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a26760) 00:24:57.542 [2024-10-07 13:35:39.216690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.542 [2024-10-07 
13:35:39.216705] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.216713] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1a26760) 00:24:57.542 [2024-10-07 13:35:39.216723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.542 [2024-10-07 13:35:39.216740] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.216749] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a26760) 00:24:57.542 [2024-10-07 13:35:39.216759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.542 [2024-10-07 13:35:39.216787] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86c00, cid 5, qid 0 00:24:57.542 [2024-10-07 13:35:39.216799] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86a80, cid 4, qid 0 00:24:57.542 [2024-10-07 13:35:39.216806] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86d80, cid 6, qid 0 00:24:57.542 [2024-10-07 13:35:39.216813] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86f00, cid 7, qid 0 00:24:57.542 [2024-10-07 13:35:39.217012] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.542 [2024-10-07 13:35:39.217027] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.542 [2024-10-07 13:35:39.217034] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217040] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a26760): datao=0, datal=8192, cccid=5 00:24:57.542 [2024-10-07 13:35:39.217048] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1a86c00) on tqpair(0x1a26760): expected_datao=0, payload_size=8192 00:24:57.542 [2024-10-07 13:35:39.217055] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217065] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217072] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217080] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.542 [2024-10-07 13:35:39.217089] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.542 [2024-10-07 13:35:39.217095] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217101] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a26760): datao=0, datal=512, cccid=4 00:24:57.542 [2024-10-07 13:35:39.217108] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a86a80) on tqpair(0x1a26760): expected_datao=0, payload_size=512 00:24:57.542 [2024-10-07 13:35:39.217115] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217124] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217131] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217139] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.542 [2024-10-07 13:35:39.217147] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.542 [2024-10-07 13:35:39.217154] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217159] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a26760): datao=0, datal=512, cccid=6 00:24:57.542 [2024-10-07 13:35:39.217167] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a86d80) on tqpair(0x1a26760): expected_datao=0, 
payload_size=512 00:24:57.542 [2024-10-07 13:35:39.217174] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217182] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217189] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217197] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:57.542 [2024-10-07 13:35:39.217205] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:57.542 [2024-10-07 13:35:39.217211] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217217] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a26760): datao=0, datal=4096, cccid=7 00:24:57.542 [2024-10-07 13:35:39.217224] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a86f00) on tqpair(0x1a26760): expected_datao=0, payload_size=4096 00:24:57.542 [2024-10-07 13:35:39.217231] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217240] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217247] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217258] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.542 [2024-10-07 13:35:39.217272] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.542 [2024-10-07 13:35:39.217279] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217286] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86c00) on tqpair=0x1a26760 00:24:57.542 [2024-10-07 13:35:39.217305] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.542 [2024-10-07 13:35:39.217316] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.542 [2024-10-07 
13:35:39.217322] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217329] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86a80) on tqpair=0x1a26760 00:24:57.542 [2024-10-07 13:35:39.217360] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.542 [2024-10-07 13:35:39.217371] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.542 [2024-10-07 13:35:39.217377] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.542 [2024-10-07 13:35:39.217383] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86d80) on tqpair=0x1a26760 00:24:57.542 [2024-10-07 13:35:39.217394] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.542 [2024-10-07 13:35:39.217403] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.542 [2024-10-07 13:35:39.217409] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.543 [2024-10-07 13:35:39.217415] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86f00) on tqpair=0x1a26760 00:24:57.543 ===================================================== 00:24:57.543 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:57.543 ===================================================== 00:24:57.543 Controller Capabilities/Features 00:24:57.543 ================================ 00:24:57.543 Vendor ID: 8086 00:24:57.543 Subsystem Vendor ID: 8086 00:24:57.543 Serial Number: SPDK00000000000001 00:24:57.543 Model Number: SPDK bdev Controller 00:24:57.543 Firmware Version: 25.01 00:24:57.543 Recommended Arb Burst: 6 00:24:57.543 IEEE OUI Identifier: e4 d2 5c 00:24:57.543 Multi-path I/O 00:24:57.543 May have multiple subsystem ports: Yes 00:24:57.543 May have multiple controllers: Yes 00:24:57.543 Associated with SR-IOV VF: No 00:24:57.543 Max Data Transfer Size: 131072 00:24:57.543 Max Number of Namespaces: 32 
00:24:57.543 Max Number of I/O Queues: 127 00:24:57.543 NVMe Specification Version (VS): 1.3 00:24:57.543 NVMe Specification Version (Identify): 1.3 00:24:57.543 Maximum Queue Entries: 128 00:24:57.543 Contiguous Queues Required: Yes 00:24:57.543 Arbitration Mechanisms Supported 00:24:57.543 Weighted Round Robin: Not Supported 00:24:57.543 Vendor Specific: Not Supported 00:24:57.543 Reset Timeout: 15000 ms 00:24:57.543 Doorbell Stride: 4 bytes 00:24:57.543 NVM Subsystem Reset: Not Supported 00:24:57.543 Command Sets Supported 00:24:57.543 NVM Command Set: Supported 00:24:57.543 Boot Partition: Not Supported 00:24:57.543 Memory Page Size Minimum: 4096 bytes 00:24:57.543 Memory Page Size Maximum: 4096 bytes 00:24:57.543 Persistent Memory Region: Not Supported 00:24:57.543 Optional Asynchronous Events Supported 00:24:57.543 Namespace Attribute Notices: Supported 00:24:57.543 Firmware Activation Notices: Not Supported 00:24:57.543 ANA Change Notices: Not Supported 00:24:57.543 PLE Aggregate Log Change Notices: Not Supported 00:24:57.543 LBA Status Info Alert Notices: Not Supported 00:24:57.543 EGE Aggregate Log Change Notices: Not Supported 00:24:57.543 Normal NVM Subsystem Shutdown event: Not Supported 00:24:57.543 Zone Descriptor Change Notices: Not Supported 00:24:57.543 Discovery Log Change Notices: Not Supported 00:24:57.543 Controller Attributes 00:24:57.543 128-bit Host Identifier: Supported 00:24:57.543 Non-Operational Permissive Mode: Not Supported 00:24:57.543 NVM Sets: Not Supported 00:24:57.543 Read Recovery Levels: Not Supported 00:24:57.543 Endurance Groups: Not Supported 00:24:57.543 Predictable Latency Mode: Not Supported 00:24:57.543 Traffic Based Keep ALive: Not Supported 00:24:57.543 Namespace Granularity: Not Supported 00:24:57.543 SQ Associations: Not Supported 00:24:57.543 UUID List: Not Supported 00:24:57.543 Multi-Domain Subsystem: Not Supported 00:24:57.543 Fixed Capacity Management: Not Supported 00:24:57.543 Variable Capacity Management: Not 
Supported 00:24:57.543 Delete Endurance Group: Not Supported 00:24:57.543 Delete NVM Set: Not Supported 00:24:57.543 Extended LBA Formats Supported: Not Supported 00:24:57.543 Flexible Data Placement Supported: Not Supported 00:24:57.543 00:24:57.543 Controller Memory Buffer Support 00:24:57.543 ================================ 00:24:57.543 Supported: No 00:24:57.543 00:24:57.543 Persistent Memory Region Support 00:24:57.543 ================================ 00:24:57.543 Supported: No 00:24:57.543 00:24:57.543 Admin Command Set Attributes 00:24:57.543 ============================ 00:24:57.543 Security Send/Receive: Not Supported 00:24:57.543 Format NVM: Not Supported 00:24:57.543 Firmware Activate/Download: Not Supported 00:24:57.543 Namespace Management: Not Supported 00:24:57.543 Device Self-Test: Not Supported 00:24:57.543 Directives: Not Supported 00:24:57.543 NVMe-MI: Not Supported 00:24:57.543 Virtualization Management: Not Supported 00:24:57.543 Doorbell Buffer Config: Not Supported 00:24:57.543 Get LBA Status Capability: Not Supported 00:24:57.543 Command & Feature Lockdown Capability: Not Supported 00:24:57.543 Abort Command Limit: 4 00:24:57.543 Async Event Request Limit: 4 00:24:57.543 Number of Firmware Slots: N/A 00:24:57.543 Firmware Slot 1 Read-Only: N/A 00:24:57.543 Firmware Activation Without Reset: N/A 00:24:57.543 Multiple Update Detection Support: N/A 00:24:57.543 Firmware Update Granularity: No Information Provided 00:24:57.543 Per-Namespace SMART Log: No 00:24:57.543 Asymmetric Namespace Access Log Page: Not Supported 00:24:57.543 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:57.543 Command Effects Log Page: Supported 00:24:57.543 Get Log Page Extended Data: Supported 00:24:57.543 Telemetry Log Pages: Not Supported 00:24:57.543 Persistent Event Log Pages: Not Supported 00:24:57.543 Supported Log Pages Log Page: May Support 00:24:57.543 Commands Supported & Effects Log Page: Not Supported 00:24:57.543 Feature Identifiers & Effects Log Page:May 
Support 00:24:57.543 NVMe-MI Commands & Effects Log Page: May Support 00:24:57.543 Data Area 4 for Telemetry Log: Not Supported 00:24:57.543 Error Log Page Entries Supported: 128 00:24:57.543 Keep Alive: Supported 00:24:57.543 Keep Alive Granularity: 10000 ms 00:24:57.543 00:24:57.543 NVM Command Set Attributes 00:24:57.543 ========================== 00:24:57.543 Submission Queue Entry Size 00:24:57.543 Max: 64 00:24:57.543 Min: 64 00:24:57.543 Completion Queue Entry Size 00:24:57.543 Max: 16 00:24:57.543 Min: 16 00:24:57.543 Number of Namespaces: 32 00:24:57.543 Compare Command: Supported 00:24:57.543 Write Uncorrectable Command: Not Supported 00:24:57.543 Dataset Management Command: Supported 00:24:57.543 Write Zeroes Command: Supported 00:24:57.543 Set Features Save Field: Not Supported 00:24:57.543 Reservations: Supported 00:24:57.543 Timestamp: Not Supported 00:24:57.543 Copy: Supported 00:24:57.543 Volatile Write Cache: Present 00:24:57.543 Atomic Write Unit (Normal): 1 00:24:57.543 Atomic Write Unit (PFail): 1 00:24:57.543 Atomic Compare & Write Unit: 1 00:24:57.543 Fused Compare & Write: Supported 00:24:57.543 Scatter-Gather List 00:24:57.543 SGL Command Set: Supported 00:24:57.543 SGL Keyed: Supported 00:24:57.543 SGL Bit Bucket Descriptor: Not Supported 00:24:57.543 SGL Metadata Pointer: Not Supported 00:24:57.543 Oversized SGL: Not Supported 00:24:57.543 SGL Metadata Address: Not Supported 00:24:57.543 SGL Offset: Supported 00:24:57.543 Transport SGL Data Block: Not Supported 00:24:57.543 Replay Protected Memory Block: Not Supported 00:24:57.543 00:24:57.543 Firmware Slot Information 00:24:57.543 ========================= 00:24:57.543 Active slot: 1 00:24:57.543 Slot 1 Firmware Revision: 25.01 00:24:57.543 00:24:57.543 00:24:57.543 Commands Supported and Effects 00:24:57.543 ============================== 00:24:57.543 Admin Commands 00:24:57.543 -------------- 00:24:57.543 Get Log Page (02h): Supported 00:24:57.543 Identify (06h): Supported 00:24:57.543 
Abort (08h): Supported 00:24:57.543 Set Features (09h): Supported 00:24:57.543 Get Features (0Ah): Supported 00:24:57.543 Asynchronous Event Request (0Ch): Supported 00:24:57.543 Keep Alive (18h): Supported 00:24:57.543 I/O Commands 00:24:57.543 ------------ 00:24:57.543 Flush (00h): Supported LBA-Change 00:24:57.543 Write (01h): Supported LBA-Change 00:24:57.543 Read (02h): Supported 00:24:57.543 Compare (05h): Supported 00:24:57.543 Write Zeroes (08h): Supported LBA-Change 00:24:57.543 Dataset Management (09h): Supported LBA-Change 00:24:57.543 Copy (19h): Supported LBA-Change 00:24:57.543 00:24:57.543 Error Log 00:24:57.543 ========= 00:24:57.543 00:24:57.543 Arbitration 00:24:57.543 =========== 00:24:57.543 Arbitration Burst: 1 00:24:57.543 00:24:57.543 Power Management 00:24:57.543 ================ 00:24:57.543 Number of Power States: 1 00:24:57.543 Current Power State: Power State #0 00:24:57.543 Power State #0: 00:24:57.543 Max Power: 0.00 W 00:24:57.543 Non-Operational State: Operational 00:24:57.543 Entry Latency: Not Reported 00:24:57.543 Exit Latency: Not Reported 00:24:57.543 Relative Read Throughput: 0 00:24:57.543 Relative Read Latency: 0 00:24:57.543 Relative Write Throughput: 0 00:24:57.543 Relative Write Latency: 0 00:24:57.543 Idle Power: Not Reported 00:24:57.543 Active Power: Not Reported 00:24:57.543 Non-Operational Permissive Mode: Not Supported 00:24:57.543 00:24:57.543 Health Information 00:24:57.543 ================== 00:24:57.543 Critical Warnings: 00:24:57.543 Available Spare Space: OK 00:24:57.543 Temperature: OK 00:24:57.543 Device Reliability: OK 00:24:57.543 Read Only: No 00:24:57.543 Volatile Memory Backup: OK 00:24:57.543 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:57.543 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:57.543 Available Spare: 0% 00:24:57.543 Available Spare Threshold: 0% 00:24:57.543 Life Percentage Used:[2024-10-07 13:35:39.217547] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:57.543 [2024-10-07 13:35:39.217559] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a26760) 00:24:57.543 [2024-10-07 13:35:39.217570] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.543 [2024-10-07 13:35:39.217592] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86f00, cid 7, qid 0 00:24:57.544 [2024-10-07 13:35:39.217694] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.544 [2024-10-07 13:35:39.217719] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.544 [2024-10-07 13:35:39.217726] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.217732] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86f00) on tqpair=0x1a26760 00:24:57.544 [2024-10-07 13:35:39.217776] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:57.544 [2024-10-07 13:35:39.217796] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86480) on tqpair=0x1a26760 00:24:57.544 [2024-10-07 13:35:39.217806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.544 [2024-10-07 13:35:39.217815] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86600) on tqpair=0x1a26760 00:24:57.544 [2024-10-07 13:35:39.217823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.544 [2024-10-07 13:35:39.217831] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86780) on tqpair=0x1a26760 00:24:57.544 [2024-10-07 13:35:39.217838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:57.544 [2024-10-07 13:35:39.217846] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.544 [2024-10-07 13:35:39.217853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.544 [2024-10-07 13:35:39.217865] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.217873] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.217879] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.544 [2024-10-07 13:35:39.217890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.544 [2024-10-07 13:35:39.217916] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.544 [2024-10-07 13:35:39.218012] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.544 [2024-10-07 13:35:39.218026] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.544 [2024-10-07 13:35:39.218033] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218040] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.544 [2024-10-07 13:35:39.218051] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218059] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218065] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.544 [2024-10-07 13:35:39.218075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.544 [2024-10-07 13:35:39.218101] nvme_tcp.c: 
951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.544 [2024-10-07 13:35:39.218198] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.544 [2024-10-07 13:35:39.218212] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.544 [2024-10-07 13:35:39.218218] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218225] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.544 [2024-10-07 13:35:39.218232] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:57.544 [2024-10-07 13:35:39.218240] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:57.544 [2024-10-07 13:35:39.218255] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218264] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218270] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.544 [2024-10-07 13:35:39.218281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.544 [2024-10-07 13:35:39.218301] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.544 [2024-10-07 13:35:39.218369] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.544 [2024-10-07 13:35:39.218381] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.544 [2024-10-07 13:35:39.218388] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218394] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.544 [2024-10-07 13:35:39.218410] 
nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218418] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218425] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.544 [2024-10-07 13:35:39.218435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.544 [2024-10-07 13:35:39.218455] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.544 [2024-10-07 13:35:39.218525] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.544 [2024-10-07 13:35:39.218538] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.544 [2024-10-07 13:35:39.218545] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218552] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.544 [2024-10-07 13:35:39.218567] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218577] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218587] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.544 [2024-10-07 13:35:39.218598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.544 [2024-10-07 13:35:39.218618] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.544 [2024-10-07 13:35:39.218713] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.544 [2024-10-07 13:35:39.218728] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.544 [2024-10-07 13:35:39.218735] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218742] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.544 [2024-10-07 13:35:39.218757] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218766] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218773] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.544 [2024-10-07 13:35:39.218783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.544 [2024-10-07 13:35:39.218804] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.544 [2024-10-07 13:35:39.218883] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.544 [2024-10-07 13:35:39.218896] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.544 [2024-10-07 13:35:39.218903] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218909] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.544 [2024-10-07 13:35:39.218924] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218934] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.218940] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.544 [2024-10-07 13:35:39.218950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.544 [2024-10-07 13:35:39.218970] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.544 [2024-10-07 
13:35:39.219041] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.544 [2024-10-07 13:35:39.219053] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.544 [2024-10-07 13:35:39.219059] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.219066] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.544 [2024-10-07 13:35:39.219081] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.219090] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.219096] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.544 [2024-10-07 13:35:39.219106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.544 [2024-10-07 13:35:39.219126] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.544 [2024-10-07 13:35:39.219219] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.544 [2024-10-07 13:35:39.219232] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.544 [2024-10-07 13:35:39.219239] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.219245] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.544 [2024-10-07 13:35:39.219261] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.219270] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.219276] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.544 [2024-10-07 13:35:39.219290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.544 [2024-10-07 13:35:39.219312] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.544 [2024-10-07 13:35:39.219382] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.544 [2024-10-07 13:35:39.219394] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.544 [2024-10-07 13:35:39.219401] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.544 [2024-10-07 13:35:39.219407] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.544 [2024-10-07 13:35:39.219423] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.219432] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.219438] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.545 [2024-10-07 13:35:39.219448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.545 [2024-10-07 13:35:39.219468] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.545 [2024-10-07 13:35:39.219546] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.545 [2024-10-07 13:35:39.219560] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.545 [2024-10-07 13:35:39.219566] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.219573] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.545 [2024-10-07 13:35:39.219588] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.219597] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:57.545 [2024-10-07 13:35:39.219604] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.545 [2024-10-07 13:35:39.219614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.545 [2024-10-07 13:35:39.219633] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.545 [2024-10-07 13:35:39.219729] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.545 [2024-10-07 13:35:39.219744] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.545 [2024-10-07 13:35:39.219751] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.219758] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.545 [2024-10-07 13:35:39.219773] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.219782] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.219789] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.545 [2024-10-07 13:35:39.219799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.545 [2024-10-07 13:35:39.219819] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.545 [2024-10-07 13:35:39.219895] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.545 [2024-10-07 13:35:39.219909] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.545 [2024-10-07 13:35:39.219916] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.219922] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) 
on tqpair=0x1a26760 00:24:57.545 [2024-10-07 13:35:39.219938] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.219946] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.219953] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.545 [2024-10-07 13:35:39.219963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.545 [2024-10-07 13:35:39.219987] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.545 [2024-10-07 13:35:39.220060] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.545 [2024-10-07 13:35:39.220074] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.545 [2024-10-07 13:35:39.220080] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.220087] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.545 [2024-10-07 13:35:39.220102] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.220111] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.220117] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.545 [2024-10-07 13:35:39.220128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.545 [2024-10-07 13:35:39.220148] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.545 [2024-10-07 13:35:39.220217] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.545 [2024-10-07 13:35:39.220231] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:24:57.545 [2024-10-07 13:35:39.220237] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.220244] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.545 [2024-10-07 13:35:39.220259] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.220268] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.220274] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.545 [2024-10-07 13:35:39.220284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.545 [2024-10-07 13:35:39.220305] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.545 [2024-10-07 13:35:39.220373] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.545 [2024-10-07 13:35:39.220386] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.545 [2024-10-07 13:35:39.220393] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.220399] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.545 [2024-10-07 13:35:39.220415] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.220424] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.220430] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.545 [2024-10-07 13:35:39.220440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.545 [2024-10-07 13:35:39.220461] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1a86900, cid 3, qid 0 00:24:57.545 [2024-10-07 13:35:39.220529] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.545 [2024-10-07 13:35:39.220541] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.545 [2024-10-07 13:35:39.220548] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.220554] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.545 [2024-10-07 13:35:39.220570] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.220579] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.220585] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.545 [2024-10-07 13:35:39.220595] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.545 [2024-10-07 13:35:39.220614] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.545 [2024-10-07 13:35:39.224683] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.545 [2024-10-07 13:35:39.224700] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.545 [2024-10-07 13:35:39.224707] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.224728] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.545 [2024-10-07 13:35:39.224746] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.224756] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.224762] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a26760) 00:24:57.545 [2024-10-07 13:35:39.224772] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:57.545 [2024-10-07 13:35:39.224794] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a86900, cid 3, qid 0 00:24:57.545 [2024-10-07 13:35:39.224882] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:57.545 [2024-10-07 13:35:39.224896] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:57.545 [2024-10-07 13:35:39.224903] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:57.545 [2024-10-07 13:35:39.224909] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a86900) on tqpair=0x1a26760 00:24:57.545 [2024-10-07 13:35:39.224922] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:57.545 0% 00:24:57.545 Data Units Read: 0 00:24:57.545 Data Units Written: 0 00:24:57.545 Host Read Commands: 0 00:24:57.545 Host Write Commands: 0 00:24:57.545 Controller Busy Time: 0 minutes 00:24:57.545 Power Cycles: 0 00:24:57.545 Power On Hours: 0 hours 00:24:57.545 Unsafe Shutdowns: 0 00:24:57.545 Unrecoverable Media Errors: 0 00:24:57.545 Lifetime Error Log Entries: 0 00:24:57.545 Warning Temperature Time: 0 minutes 00:24:57.545 Critical Temperature Time: 0 minutes 00:24:57.545 00:24:57.545 Number of Queues 00:24:57.545 ================ 00:24:57.545 Number of I/O Submission Queues: 127 00:24:57.545 Number of I/O Completion Queues: 127 00:24:57.545 00:24:57.545 Active Namespaces 00:24:57.545 ================= 00:24:57.545 Namespace ID:1 00:24:57.545 Error Recovery Timeout: Unlimited 00:24:57.545 Command Set Identifier: NVM (00h) 00:24:57.545 Deallocate: Supported 00:24:57.545 Deallocated/Unwritten Error: Not Supported 00:24:57.545 Deallocated Read Value: Unknown 00:24:57.545 Deallocate in Write Zeroes: Not Supported 00:24:57.545 Deallocated Guard Field: 0xFFFF 00:24:57.545 Flush: 
Supported 00:24:57.545 Reservation: Supported 00:24:57.545 Namespace Sharing Capabilities: Multiple Controllers 00:24:57.545 Size (in LBAs): 131072 (0GiB) 00:24:57.545 Capacity (in LBAs): 131072 (0GiB) 00:24:57.545 Utilization (in LBAs): 131072 (0GiB) 00:24:57.545 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:57.545 EUI64: ABCDEF0123456789 00:24:57.545 UUID: b60bec75-10fa-4f9b-af3d-425d29396079 00:24:57.545 Thin Provisioning: Not Supported 00:24:57.545 Per-NS Atomic Units: Yes 00:24:57.545 Atomic Boundary Size (Normal): 0 00:24:57.545 Atomic Boundary Size (PFail): 0 00:24:57.545 Atomic Boundary Offset: 0 00:24:57.545 Maximum Single Source Range Length: 65535 00:24:57.545 Maximum Copy Length: 65535 00:24:57.545 Maximum Source Range Count: 1 00:24:57.545 NGUID/EUI64 Never Reused: No 00:24:57.545 Namespace Write Protected: No 00:24:57.545 Number of LBA Formats: 1 00:24:57.545 Current LBA Format: LBA Format #00 00:24:57.545 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:57.545 00:24:57.545 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:57.545 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:57.545 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.545 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # nvmfcleanup 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 
-- # '[' tcp == tcp ']' 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.806 rmmod nvme_tcp 00:24:57.806 rmmod nvme_fabrics 00:24:57.806 rmmod nvme_keyring 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@515 -- # '[' -n 1862716 ']' 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # killprocess 1862716 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 1862716 ']' 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 1862716 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1862716 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1862716' 00:24:57.806 killing process with pid 1862716 00:24:57.806 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 1862716 00:24:57.806 
13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 1862716 00:24:58.064 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:24:58.064 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:24:58.064 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:24:58.065 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:58.065 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-save 00:24:58.065 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:24:58.065 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@789 -- # iptables-restore 00:24:58.065 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:58.065 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:58.065 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.065 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.065 13:35:39 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:00.606 00:25:00.606 real 0m5.813s 00:25:00.606 user 0m5.293s 00:25:00.606 sys 0m1.986s 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:00.606 ************************************ 00:25:00.606 END TEST nvmf_identify 00:25:00.606 ************************************ 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test 
nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.606 ************************************ 00:25:00.606 START TEST nvmf_perf 00:25:00.606 ************************************ 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:00.606 * Looking for test storage... 00:25:00.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- 
# local 'op=<' 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:00.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.606 --rc genhtml_branch_coverage=1 00:25:00.606 --rc genhtml_function_coverage=1 00:25:00.606 --rc genhtml_legend=1 00:25:00.606 --rc geninfo_all_blocks=1 00:25:00.606 --rc geninfo_unexecuted_blocks=1 00:25:00.606 00:25:00.606 ' 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:00.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.606 --rc genhtml_branch_coverage=1 00:25:00.606 --rc genhtml_function_coverage=1 00:25:00.606 --rc genhtml_legend=1 00:25:00.606 --rc geninfo_all_blocks=1 00:25:00.606 --rc geninfo_unexecuted_blocks=1 00:25:00.606 00:25:00.606 ' 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:00.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.606 --rc genhtml_branch_coverage=1 00:25:00.606 --rc genhtml_function_coverage=1 00:25:00.606 --rc genhtml_legend=1 00:25:00.606 --rc geninfo_all_blocks=1 00:25:00.606 --rc geninfo_unexecuted_blocks=1 00:25:00.606 00:25:00.606 ' 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:00.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:00.606 --rc genhtml_branch_coverage=1 00:25:00.606 --rc genhtml_function_coverage=1 00:25:00.606 --rc genhtml_legend=1 00:25:00.606 --rc geninfo_all_blocks=1 00:25:00.606 --rc geninfo_unexecuted_blocks=1 00:25:00.606 00:25:00.606 ' 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname 
-s 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.606 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:00.607 
13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.607 13:35:41 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:00.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:25:00.607 13:35:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:02.512 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:02.512 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:02.512 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:02.512 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:02.512 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:02.512 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:02.512 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:25:02.513 Found 0000:09:00.0 (0x8086 - 0x1592) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:02.513 
13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:25:02.513 Found 0000:09:00.1 (0x8086 - 0x1592) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up 
]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:02.513 Found net devices under 0000:09:00.0: cvl_0_0 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:02.513 Found net devices under 0000:09:00.1: cvl_0_1 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # is_hw=yes 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:02.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:02.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:25:02.513 00:25:02.513 --- 10.0.0.2 ping statistics --- 00:25:02.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.513 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:02.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:02.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:25:02.513 00:25:02.513 --- 10.0.0.1 ping statistics --- 00:25:02.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.513 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # return 0 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:02.513 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:02.514 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:02.514 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # nvmfpid=1864709 00:25:02.514 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:02.514 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # waitforlisten 1864709 00:25:02.514 
13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 1864709 ']' 00:25:02.514 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.514 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:02.514 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.514 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:02.514 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:02.774 [2024-10-07 13:35:44.236819] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:25:02.774 [2024-10-07 13:35:44.236904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.774 [2024-10-07 13:35:44.299560] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:02.774 [2024-10-07 13:35:44.404995] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.774 [2024-10-07 13:35:44.405052] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.774 [2024-10-07 13:35:44.405075] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.774 [2024-10-07 13:35:44.405086] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.775 [2024-10-07 13:35:44.405096] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
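The `nvmf_tcp_init` trace above shows how the harness builds its loopback topology: the target-side port (`cvl_0_0`) is moved into a dedicated network namespace (`cvl_0_0_ns_spdk`) so that initiator traffic to 10.0.0.2:4420 actually crosses the NIC pair instead of the kernel loopback, and an iptables rule opens the NVMe/TCP port on the initiator-facing interface. A minimal dry-run sketch of that same sequence, with interface names and addresses copied from the log (the `RUN=echo` indirection is my addition so the sketch runs without root):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace-based loopback topology traced above.
# RUN=echo prints each command instead of executing it; drop it (RUN=)
# and run as root to actually configure the interfaces.
RUN=echo
NS=cvl_0_0_ns_spdk

setup_loopback_topology() {
    $RUN ip netns add "$NS"
    $RUN ip link set cvl_0_0 netns "$NS"                      # target port into the namespace
    $RUN ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    $RUN ip link set cvl_0_1 up
    $RUN ip netns exec "$NS" ip link set cvl_0_0 up
    $RUN ip netns exec "$NS" ip link set lo up
    # open the NVMe/TCP listener port toward the initiator interface
    $RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
}

setup_loopback_topology
```

The target application is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt …`, as the log shows), which is why the two `ping` checks above verify reachability in both directions before the test proceeds.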
00:25:02.775 [2024-10-07 13:35:44.406568] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.775 [2024-10-07 13:35:44.406634] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.775 [2024-10-07 13:35:44.406708] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.775 [2024-10-07 13:35:44.406712] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.034 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:03.034 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:25:03.034 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:03.034 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:03.034 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:03.034 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.034 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:03.034 13:35:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:06.393 13:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:06.393 13:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:06.393 13:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:84:00.0 00:25:06.393 13:35:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:06.680 13:35:48 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:06.680 13:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:84:00.0 ']' 00:25:06.680 13:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:06.680 13:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:06.680 13:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:06.937 [2024-10-07 13:35:48.507445] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.937 13:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:07.195 13:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:07.195 13:35:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:07.453 13:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:07.453 13:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:07.711 13:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:07.968 [2024-10-07 13:35:49.579428] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.968 13:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:25:08.228 13:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:84:00.0 ']' 00:25:08.228 13:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:25:08.228 13:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:08.228 13:35:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:84:00.0' 00:25:09.606 Initializing NVMe Controllers 00:25:09.606 Attached to NVMe Controller at 0000:84:00.0 [8086:0a54] 00:25:09.606 Associating PCIE (0000:84:00.0) NSID 1 with lcore 0 00:25:09.606 Initialization complete. Launching workers. 00:25:09.606 ======================================================== 00:25:09.606 Latency(us) 00:25:09.606 Device Information : IOPS MiB/s Average min max 00:25:09.606 PCIE (0000:84:00.0) NSID 1 from core 0: 82918.55 323.90 385.42 32.10 4375.83 00:25:09.606 ======================================================== 00:25:09.606 Total : 82918.55 323.90 385.42 32.10 4375.83 00:25:09.606 00:25:09.606 13:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:10.987 Initializing NVMe Controllers 00:25:10.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:10.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:10.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:10.987 Initialization complete. Launching workers. 
00:25:10.987 ======================================================== 00:25:10.987 Latency(us) 00:25:10.987 Device Information : IOPS MiB/s Average min max 00:25:10.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 100.00 0.39 10384.69 138.39 45435.81 00:25:10.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 40.00 0.16 25909.81 7954.06 47904.26 00:25:10.987 ======================================================== 00:25:10.987 Total : 140.00 0.55 14820.44 138.39 47904.26 00:25:10.987 00:25:10.987 13:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:12.363 Initializing NVMe Controllers 00:25:12.363 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:12.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:12.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:12.364 Initialization complete. Launching workers. 
00:25:12.364 ======================================================== 00:25:12.364 Latency(us) 00:25:12.364 Device Information : IOPS MiB/s Average min max 00:25:12.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8479.48 33.12 3774.56 697.11 10531.66 00:25:12.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3764.01 14.70 8516.36 4152.44 16813.32 00:25:12.364 ======================================================== 00:25:12.364 Total : 12243.49 47.83 5232.33 697.11 16813.32 00:25:12.364 00:25:12.364 13:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:12.364 13:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:12.364 13:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:14.893 Initializing NVMe Controllers 00:25:14.893 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:14.893 Controller IO queue size 128, less than required. 00:25:14.893 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:14.893 Controller IO queue size 128, less than required. 00:25:14.893 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:14.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:14.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:14.893 Initialization complete. Launching workers. 
00:25:14.893 ======================================================== 00:25:14.893 Latency(us) 00:25:14.893 Device Information : IOPS MiB/s Average min max 00:25:14.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1703.11 425.78 76283.81 48298.89 100782.08 00:25:14.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 585.51 146.38 229697.62 91813.77 357972.31 00:25:14.893 ======================================================== 00:25:14.893 Total : 2288.61 572.15 115532.31 48298.89 357972.31 00:25:14.893 00:25:14.893 13:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:15.461 No valid NVMe controllers or AIO or URING devices found 00:25:15.461 Initializing NVMe Controllers 00:25:15.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:15.461 Controller IO queue size 128, less than required. 00:25:15.461 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:15.461 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:15.461 Controller IO queue size 128, less than required. 00:25:15.461 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:15.461 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:15.461 WARNING: Some requested NVMe devices were skipped 00:25:15.461 13:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:17.994 Initializing NVMe Controllers 00:25:17.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:17.994 Controller IO queue size 128, less than required. 00:25:17.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.994 Controller IO queue size 128, less than required. 00:25:17.994 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:17.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:17.994 Initialization complete. Launching workers. 
00:25:17.994 00:25:17.994 ==================== 00:25:17.994 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:17.994 TCP transport: 00:25:17.994 polls: 8305 00:25:17.994 idle_polls: 5725 00:25:17.994 sock_completions: 2580 00:25:17.994 nvme_completions: 5305 00:25:17.994 submitted_requests: 8036 00:25:17.994 queued_requests: 1 00:25:17.994 00:25:17.994 ==================== 00:25:17.994 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:17.994 TCP transport: 00:25:17.994 polls: 11329 00:25:17.994 idle_polls: 8396 00:25:17.994 sock_completions: 2933 00:25:17.994 nvme_completions: 5489 00:25:17.994 submitted_requests: 8184 00:25:17.994 queued_requests: 1 00:25:17.994 ======================================================== 00:25:17.994 Latency(us) 00:25:17.994 Device Information : IOPS MiB/s Average min max 00:25:17.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1323.41 330.85 98688.61 66372.97 160549.15 00:25:17.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1369.32 342.33 95108.72 50653.25 132002.25 00:25:17.994 ======================================================== 00:25:17.994 Total : 2692.73 673.18 96868.15 50653.25 160549.15 00:25:17.994 00:25:17.994 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:17.994 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@121 -- # sync 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:18.252 rmmod nvme_tcp 00:25:18.252 rmmod nvme_fabrics 00:25:18.252 rmmod nvme_keyring 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@515 -- # '[' -n 1864709 ']' 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # killprocess 1864709 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 1864709 ']' 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 1864709 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1864709 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1864709' 00:25:18.252 killing process with pid 1864709 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 
-- # kill 1864709 00:25:18.252 13:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 1864709 00:25:20.156 13:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:20.156 13:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:20.156 13:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:20.156 13:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:20.156 13:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-save 00:25:20.156 13:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:20.156 13:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@789 -- # iptables-restore 00:25:20.156 13:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:20.156 13:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:20.156 13:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.156 13:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.156 13:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:22.066 00:25:22.066 real 0m21.626s 00:25:22.066 user 1m6.172s 00:25:22.066 sys 0m5.623s 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:22.066 ************************************ 00:25:22.066 END TEST nvmf_perf 00:25:22.066 ************************************ 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.066 ************************************ 00:25:22.066 START TEST nvmf_fio_host 00:25:22.066 ************************************ 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:22.066 * Looking for test storage... 00:25:22.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:22.066 13:36:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:22.066 13:36:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:22.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.066 --rc genhtml_branch_coverage=1 00:25:22.066 --rc genhtml_function_coverage=1 00:25:22.066 --rc genhtml_legend=1 00:25:22.066 --rc geninfo_all_blocks=1 00:25:22.066 --rc geninfo_unexecuted_blocks=1 00:25:22.066 00:25:22.066 ' 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:22.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.066 --rc genhtml_branch_coverage=1 00:25:22.066 --rc genhtml_function_coverage=1 00:25:22.066 --rc genhtml_legend=1 00:25:22.066 --rc geninfo_all_blocks=1 00:25:22.066 --rc geninfo_unexecuted_blocks=1 00:25:22.066 00:25:22.066 ' 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:22.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.066 --rc genhtml_branch_coverage=1 00:25:22.066 --rc genhtml_function_coverage=1 00:25:22.066 --rc genhtml_legend=1 00:25:22.066 --rc geninfo_all_blocks=1 00:25:22.066 --rc geninfo_unexecuted_blocks=1 00:25:22.066 00:25:22.066 ' 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:22.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:22.066 --rc genhtml_branch_coverage=1 00:25:22.066 --rc genhtml_function_coverage=1 00:25:22.066 --rc genhtml_legend=1 00:25:22.066 --rc geninfo_all_blocks=1 00:25:22.066 --rc geninfo_unexecuted_blocks=1 00:25:22.066 00:25:22.066 ' 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.066 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:22.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:22.067 13:36:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:22.067 13:36:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:09:00.0 (0x8086 - 0x1592)' 00:25:23.970 Found 0000:09:00.0 (0x8086 - 0x1592) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:25:23.970 Found 0000:09:00.1 (0x8086 - 0x1592) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.970 13:36:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:23.970 Found net devices under 0000:09:00.0: cvl_0_0 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:23.970 Found net devices under 0000:09:00.1: cvl_0_1 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@430 -- # (( 2 == 0 )) 
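The NIC-discovery messages above ("Found <pci-addr> (<vendor> - <device>)" and "Found net devices under <pci-addr>: <ifname>") follow a fixed shape, so the PCI-address-to-interface mapping can be recovered from the log with a throwaway parser. The regexes and dictionary layout below are my own illustration, not SPDK code:

```python
import re

# Toy parser for the two "Found ..." message shapes emitted during NIC
# discovery in the log above; sample lines are copied from this run.
log = """\
Found 0000:09:00.0 (0x8086 - 0x1592)
Found net devices under 0000:09:00.0: cvl_0_0
Found 0000:09:00.1 (0x8086 - 0x1592)
Found net devices under 0000:09:00.1: cvl_0_1
"""

pci_ids = {}   # PCI address -> (vendor id, device id)
net_devs = {}  # PCI address -> network interface name

for line in log.splitlines():
    m = re.match(r"Found (\S+) \((0x[0-9a-f]+) - (0x[0-9a-f]+)\)", line)
    if m:
        pci_ids[m.group(1)] = (m.group(2), m.group(3))
        continue
    m = re.match(r"Found net devices under (\S+): (\S+)", line)
    if m:
        net_devs[m.group(1)] = m.group(2)

print(pci_ids)
print(net_devs)
```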
00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # is_hw=yes 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:23.970 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.970 13:36:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:24.229 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:24.229 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.294 ms 00:25:24.229 00:25:24.229 --- 10.0.0.2 ping statistics --- 00:25:24.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.229 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:24.229 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:24.229 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:25:24.229 00:25:24.229 --- 10.0.0.1 ping statistics --- 00:25:24.229 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:24.229 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # return 0 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1868609 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1868609 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 1868609 ']' 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:24.229 13:36:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.229 [2024-10-07 13:36:05.860438] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:25:24.229 [2024-10-07 13:36:05.860508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:24.229 [2024-10-07 13:36:05.921847] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:24.487 [2024-10-07 13:36:06.027920] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:24.487 [2024-10-07 13:36:06.027969] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:24.487 [2024-10-07 13:36:06.027983] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:24.487 [2024-10-07 13:36:06.027994] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:24.487 [2024-10-07 13:36:06.028004] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:24.487 [2024-10-07 13:36:06.029732] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.487 [2024-10-07 13:36:06.029785] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:24.487 [2024-10-07 13:36:06.029837] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:25:24.487 [2024-10-07 13:36:06.029841] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.487 13:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.487 13:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:25:24.487 13:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:24.745 [2024-10-07 13:36:06.437931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.004 13:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:25.004 13:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:25.004 13:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.004 13:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:25.261 Malloc1 00:25:25.261 13:36:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:25.520 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:25.778 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:26.036 [2024-10-07 13:36:07.567753] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.036 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:26.295 13:36:07 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:26.295 13:36:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:26.553 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:26.553 fio-3.35 00:25:26.553 Starting 1 thread 00:25:29.087 00:25:29.087 test: (groupid=0, jobs=1): err= 0: pid=1869321: Mon Oct 7 13:36:10 2024 00:25:29.087 read: IOPS=8726, BW=34.1MiB/s (35.7MB/s)(68.4MiB/2007msec) 00:25:29.087 slat (nsec): min=1953, max=103770, avg=2574.15, stdev=1612.75 00:25:29.087 clat (usec): min=2097, max=14366, avg=7997.88, stdev=656.62 00:25:29.087 lat (usec): min=2124, max=14368, avg=8000.45, stdev=656.53 00:25:29.087 clat percentiles (usec): 00:25:29.087 | 1.00th=[ 6521], 5.00th=[ 6980], 10.00th=[ 7177], 20.00th=[ 7504], 00:25:29.087 | 30.00th=[ 7701], 40.00th=[ 7832], 50.00th=[ 8029], 60.00th=[ 8160], 00:25:29.087 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[ 8979], 00:25:29.087 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[11469], 99.95th=[12125], 00:25:29.087 | 99.99th=[13698] 00:25:29.087 bw ( KiB/s): min=33808, max=35752, per=100.00%, avg=34908.00, stdev=807.82, samples=4 00:25:29.087 iops : min= 8452, max= 8938, avg=8727.00, stdev=201.95, samples=4 00:25:29.087 write: IOPS=8726, BW=34.1MiB/s (35.7MB/s)(68.4MiB/2007msec); 0 zone resets 00:25:29.087 slat (nsec): min=2029, max=92711, avg=2660.99, stdev=1423.61 00:25:29.087 clat (usec): min=1497, max=13373, avg=6603.39, stdev=557.04 00:25:29.087 lat (usec): min=1503, max=13376, avg=6606.05, stdev=557.00 00:25:29.087 clat percentiles (usec): 00:25:29.087 | 1.00th=[ 5407], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6194], 00:25:29.087 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 
00:25:29.087 | 70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7242], 95.00th=[ 7439], 00:25:29.087 | 99.00th=[ 7832], 99.50th=[ 8029], 99.90th=[10290], 99.95th=[11863], 00:25:29.087 | 99.99th=[13304] 00:25:29.087 bw ( KiB/s): min=34680, max=35328, per=99.97%, avg=34894.00, stdev=294.62, samples=4 00:25:29.087 iops : min= 8670, max= 8832, avg=8723.50, stdev=73.65, samples=4 00:25:29.087 lat (msec) : 2=0.02%, 4=0.11%, 10=99.66%, 20=0.21% 00:25:29.087 cpu : usr=66.45%, sys=31.85%, ctx=73, majf=0, minf=37 00:25:29.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:29.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:29.087 issued rwts: total=17514,17514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.087 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:29.087 00:25:29.087 Run status group 0 (all jobs): 00:25:29.087 READ: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=68.4MiB (71.7MB), run=2007-2007msec 00:25:29.087 WRITE: bw=34.1MiB/s (35.7MB/s), 34.1MiB/s-34.1MiB/s (35.7MB/s-35.7MB/s), io=68.4MiB (71.7MB), run=2007-2007msec 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- 
# sanitizers=('libasan' 'libclang_rt.asan') 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' 
]] 00:25:29.087 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:29.088 13:36:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:29.088 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:29.088 fio-3.35 00:25:29.088 Starting 1 thread 00:25:31.619 00:25:31.619 test: (groupid=0, jobs=1): err= 0: pid=1869892: Mon Oct 7 13:36:13 2024 00:25:31.619 read: IOPS=8176, BW=128MiB/s (134MB/s)(256MiB/2007msec) 00:25:31.619 slat (nsec): min=2826, max=93459, avg=3741.73, stdev=1833.52 00:25:31.619 clat (usec): min=2285, max=16559, avg=8905.14, stdev=2044.83 00:25:31.619 lat (usec): min=2303, max=16562, avg=8908.88, stdev=2044.83 00:25:31.619 clat percentiles (usec): 00:25:31.619 | 1.00th=[ 4621], 5.00th=[ 5604], 10.00th=[ 6325], 20.00th=[ 7111], 00:25:31.619 | 30.00th=[ 7701], 40.00th=[ 8291], 50.00th=[ 8979], 60.00th=[ 9503], 00:25:31.619 | 70.00th=[10028], 80.00th=[10552], 90.00th=[11469], 95.00th=[12125], 00:25:31.619 | 99.00th=[14484], 99.50th=[15008], 99.90th=[16057], 99.95th=[16188], 00:25:31.619 | 99.99th=[16581] 00:25:31.619 bw ( KiB/s): min=57664, max=77216, per=51.20%, avg=66976.00, stdev=9752.85, samples=4 00:25:31.619 iops : min= 3604, max= 4826, avg=4186.00, stdev=609.55, samples=4 00:25:31.619 write: IOPS=4884, BW=76.3MiB/s (80.0MB/s)(137MiB/1795msec); 0 zone resets 00:25:31.619 slat (usec): min=30, max=185, avg=34.12, stdev= 6.20 00:25:31.619 clat (usec): min=6901, max=17276, avg=11692.86, stdev=1930.83 00:25:31.619 lat (usec): min=6933, max=17307, avg=11726.97, stdev=1930.92 00:25:31.619 clat percentiles (usec): 00:25:31.619 | 1.00th=[ 7635], 5.00th=[ 8586], 10.00th=[ 9241], 
20.00th=[10028], 00:25:31.619 | 30.00th=[10552], 40.00th=[11076], 50.00th=[11600], 60.00th=[12125], 00:25:31.619 | 70.00th=[12649], 80.00th=[13304], 90.00th=[14222], 95.00th=[15139], 00:25:31.619 | 99.00th=[16188], 99.50th=[16581], 99.90th=[17171], 99.95th=[17171], 00:25:31.619 | 99.99th=[17171] 00:25:31.619 bw ( KiB/s): min=61344, max=79584, per=89.42%, avg=69888.00, stdev=9657.75, samples=4 00:25:31.619 iops : min= 3834, max= 4974, avg=4368.00, stdev=603.61, samples=4 00:25:31.619 lat (msec) : 4=0.16%, 10=52.48%, 20=47.36% 00:25:31.619 cpu : usr=78.56%, sys=19.99%, ctx=37, majf=0, minf=67 00:25:31.619 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:31.619 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:31.619 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:31.619 issued rwts: total=16410,8768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:31.619 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:31.619 00:25:31.619 Run status group 0 (all jobs): 00:25:31.619 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=256MiB (269MB), run=2007-2007msec 00:25:31.619 WRITE: bw=76.3MiB/s (80.0MB/s), 76.3MiB/s-76.3MiB/s (80.0MB/s-80.0MB/s), io=137MiB (144MB), run=1795-1795msec 00:25:31.620 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:31.620 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:31.620 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:31.620 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:31.620 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:31.620 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # nvmfcleanup 
00:25:31.620 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:31.620 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:31.620 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:31.620 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:31.620 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:31.620 rmmod nvme_tcp 00:25:31.879 rmmod nvme_fabrics 00:25:31.879 rmmod nvme_keyring 00:25:31.879 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:31.879 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:31.879 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:31.879 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@515 -- # '[' -n 1868609 ']' 00:25:31.879 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # killprocess 1868609 00:25:31.879 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 1868609 ']' 00:25:31.879 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 1868609 00:25:31.879 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:25:31.879 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:31.879 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1868609 00:25:31.879 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:31.879 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:31.879 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1868609' 
00:25:31.879 killing process with pid 1868609 00:25:31.879 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 1868609 00:25:31.880 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 1868609 00:25:32.138 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:32.138 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:32.138 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:32.138 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:32.138 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-save 00:25:32.138 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # iptables-restore 00:25:32.138 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:32.138 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:32.138 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:32.138 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.138 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:32.138 13:36:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.049 13:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:34.049 00:25:34.049 real 0m12.283s 00:25:34.049 user 0m36.362s 00:25:34.049 sys 0m3.970s 00:25:34.049 13:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:34.049 13:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.049 ************************************ 
00:25:34.049 END TEST nvmf_fio_host 00:25:34.049 ************************************ 00:25:34.049 13:36:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:34.049 13:36:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:34.049 13:36:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:34.049 13:36:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.309 ************************************ 00:25:34.309 START TEST nvmf_failover 00:25:34.309 ************************************ 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:34.309 * Looking for test storage... 00:25:34.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:34.309 13:36:15 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:34.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.309 --rc genhtml_branch_coverage=1 00:25:34.309 --rc genhtml_function_coverage=1 00:25:34.309 --rc genhtml_legend=1 00:25:34.309 --rc geninfo_all_blocks=1 00:25:34.309 --rc geninfo_unexecuted_blocks=1 00:25:34.309 00:25:34.309 ' 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:34.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.309 --rc genhtml_branch_coverage=1 00:25:34.309 --rc genhtml_function_coverage=1 00:25:34.309 --rc genhtml_legend=1 00:25:34.309 --rc geninfo_all_blocks=1 00:25:34.309 --rc geninfo_unexecuted_blocks=1 00:25:34.309 00:25:34.309 ' 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:34.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.309 --rc genhtml_branch_coverage=1 00:25:34.309 --rc genhtml_function_coverage=1 00:25:34.309 --rc genhtml_legend=1 00:25:34.309 --rc geninfo_all_blocks=1 00:25:34.309 --rc geninfo_unexecuted_blocks=1 00:25:34.309 00:25:34.309 ' 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:34.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:34.309 --rc genhtml_branch_coverage=1 00:25:34.309 --rc genhtml_function_coverage=1 00:25:34.309 --rc genhtml_legend=1 00:25:34.309 --rc 
geninfo_all_blocks=1 00:25:34.309 --rc geninfo_unexecuted_blocks=1 00:25:34.309 00:25:34.309 ' 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:21b7cb46-a602-e411-a339-001e67bc3be4 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=21b7cb46-a602-e411-a339-001e67bc3be4 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:34.309 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:34.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@467 -- # '[' -z tcp ']' 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # prepare_net_devs 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # local -g is_hw=no 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # remove_spdk_ns 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ phy != virt ]] 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # gather_supported_nvmf_pci_devs 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:34.310 13:36:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:36.219 13:36:17 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:36.219 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x1592)' 00:25:36.220 Found 0000:09:00.0 (0x8086 - 0x1592) 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x1592)' 00:25:36.220 Found 0000:09:00.1 (0x8086 - 0x1592) 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x1592 == \0\x\1\0\1\7 ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x1592 == \0\x\1\0\1\9 ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:36.220 Found net devices under 0000:09:00.0: cvl_0_0 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@408 -- # for pci in "${pci_devs[@]}" 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@409 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ tcp == tcp ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@415 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ up == up ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # (( 1 == 0 )) 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@425 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@426 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:36.220 Found net devices under 0000:09:00.1: cvl_0_1 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # net_devs+=("${pci_net_devs[@]}") 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@430 -- # (( 2 == 0 )) 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # is_hw=yes 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ yes == yes ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@443 -- # [[ tcp == tcp ]] 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # nvmf_tcp_init 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:36.220 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:36.479 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:36.479 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:36.479 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:36.479 13:36:17 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@788 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:36.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:36.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:25:36.479 00:25:36.479 --- 10.0.0.2 ping statistics --- 00:25:36.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.479 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:36.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:36.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:25:36.479 00:25:36.479 --- 10.0.0.1 ping statistics --- 00:25:36.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:36.479 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # return 0 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # '[' '' == iso ']' 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # [[ tcp == \r\d\m\a ]] 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # [[ tcp == \t\c\p ]] 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@491 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@494 -- # '[' tcp == tcp ']' 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@500 -- # modprobe nvme-tcp 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # timing_enter start_nvmf_tgt 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # nvmfpid=1871988 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@508 -- # waitforlisten 1871988 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 1871988 ']' 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:36.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:36.479 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:36.479 [2024-10-07 13:36:18.101915] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:25:36.479 [2024-10-07 13:36:18.102009] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.479 [2024-10-07 13:36:18.164987] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:36.738 [2024-10-07 13:36:18.274907] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.738 [2024-10-07 13:36:18.274965] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.738 [2024-10-07 13:36:18.274989] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.738 [2024-10-07 13:36:18.275001] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:36.738 [2024-10-07 13:36:18.275011] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:36.738 [2024-10-07 13:36:18.275890] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.738 [2024-10-07 13:36:18.275950] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.738 [2024-10-07 13:36:18.275946] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:25:36.738 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:36.738 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:36.738 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # timing_exit start_nvmf_tgt 00:25:36.738 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:36.738 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:36.738 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.738 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:36.996 [2024-10-07 13:36:18.679519] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.254 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:37.512 Malloc0 00:25:37.512 13:36:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:37.770 13:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:38.029 13:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:38.288 [2024-10-07 13:36:19.778992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:38.288 13:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:38.546 [2024-10-07 13:36:20.043818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:38.546 13:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:38.805 [2024-10-07 13:36:20.316805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:38.805 13:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1872271 00:25:38.805 13:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:38.805 13:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:38.805 13:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1872271 /var/tmp/bdevperf.sock 00:25:38.805 13:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 
-- # '[' -z 1872271 ']' 00:25:38.805 13:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:38.805 13:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:38.805 13:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:38.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:38.805 13:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:38.805 13:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:39.063 13:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:39.063 13:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:25:39.063 13:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:39.632 NVMe0n1 00:25:39.632 13:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:39.891 NVMe0n1 00:25:39.891 13:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1872399 00:25:39.891 13:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:39.891 13:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 
1 00:25:40.827 13:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:41.086 [2024-10-07 13:36:22.739527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739702] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739817] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 [2024-10-07 13:36:22.739940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1556cd0 is same with the state(6) to be set 00:25:41.086 13:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:44.377 13:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t 
tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:44.635 NVMe0n1 00:25:44.635 13:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:44.894 [2024-10-07 13:36:26.410871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.410920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.410937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.410951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.410972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.410985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.410997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 
13:36:26.411060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411097] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411220] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411325] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411337] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411371] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.894 [2024-10-07 13:36:26.411430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.895 [2024-10-07 13:36:26.411441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.895 [2024-10-07 13:36:26.411451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.895 [2024-10-07 13:36:26.411462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.895 [2024-10-07 13:36:26.411473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.895 [2024-10-07 13:36:26.411483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.895 [2024-10-07 13:36:26.411494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.895 [2024-10-07 13:36:26.411505] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15571c0 is same with the state(6) to be set 00:25:44.895 13:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:48.245 13:36:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.245 [2024-10-07 13:36:29.697437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.245 13:36:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:49.201 13:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:49.459 13:36:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1872399 00:25:56.059 { 00:25:56.059 "results": [ 00:25:56.059 { 00:25:56.059 "job": "NVMe0n1", 00:25:56.059 "core_mask": "0x1", 00:25:56.059 "workload": "verify", 00:25:56.059 "status": "finished", 00:25:56.059 "verify_range": { 00:25:56.059 "start": 0, 00:25:56.059 "length": 16384 00:25:56.059 }, 00:25:56.059 "queue_depth": 128, 00:25:56.059 "io_size": 4096, 00:25:56.059 "runtime": 15.05185, 00:25:56.059 "iops": 8437.700349126519, 00:25:56.059 "mibps": 32.95976698877546, 00:25:56.059 "io_failed": 0, 00:25:56.059 "io_timeout": 0, 00:25:56.059 "avg_latency_us": 15102.091746867456, 00:25:56.059 "min_latency_us": 3034.074074074074, 00:25:56.059 "max_latency_us": 44661.57037037037 00:25:56.059 } 00:25:56.059 ], 00:25:56.059 "core_count": 1 00:25:56.059 } 00:25:56.059 13:36:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1872271 00:25:56.059 13:36:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1872271 ']' 00:25:56.059 13:36:36 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1872271 00:25:56.059 13:36:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:56.059 13:36:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:56.059 13:36:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1872271 00:25:56.059 13:36:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:56.059 13:36:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:56.059 13:36:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1872271' 00:25:56.059 killing process with pid 1872271 00:25:56.059 13:36:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1872271 00:25:56.059 13:36:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1872271 00:25:56.059 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:56.059 [2024-10-07 13:36:20.384642] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization... 00:25:56.059 [2024-10-07 13:36:20.384754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872271 ] 00:25:56.059 [2024-10-07 13:36:20.442804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.059 [2024-10-07 13:36:20.556923] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.059 Running I/O for 15 seconds... 
00:25:56.059 8407.00 IOPS, 32.84 MiB/s [2024-10-07T11:36:37.771Z] [2024-10-07 13:36:22.742412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:56.059 [2024-10-07 13:36:22.742635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.742973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.742987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.743002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.743016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.743031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.743045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.743061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.743075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.743091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.743105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.743120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.059 [2024-10-07 13:36:22.743134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.743149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:56.059 [2024-10-07 13:36:22.743163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.743178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.059 [2024-10-07 13:36:22.743196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.059 [2024-10-07 13:36:22.743213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.059 [2024-10-07 13:36:22.743227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743328] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 
[2024-10-07 13:36:22.743662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743833] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.743981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.743994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.744009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.744023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.744038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.744053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.744068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.744081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.744096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.744110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.744124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.744138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.744153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.744166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.060 [2024-10-07 13:36:22.744180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.060 [2024-10-07 13:36:22.744194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE / ABORTED - SQ DELETION (00/08) pairs repeated for lba:77832 through lba:78064 (len:8 each), timestamps 2024-10-07 13:36:22.744209 through 13:36:22.745078 ...]
00:25:56.061 [2024-10-07 13:36:22.745112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.061 [2024-10-07 13:36:22.745130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78072 len:8 PRP1 0x0 PRP2 0x0 00:25:56.061 [2024-10-07 13:36:22.745143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.061 [2024-10-07 13:36:22.745160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... identical aborting queued i/o / Command completed manually / WRITE / ABORTED - SQ DELETION cycles repeated for lba:78080 through lba:78376, timestamps 2024-10-07 13:36:22.745172 through 13:36:22.747050 ...]
00:25:56.063 [2024-10-07 13:36:22.747061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77544 len:8 PRP1 0x0 PRP2 0x0 00:25:56.063 [2024-10-07 13:36:22.747074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.063 [2024-10-07 13:36:22.747131] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d12030 was disconnected and freed. reset controller. 00:25:56.063 [2024-10-07 13:36:22.748447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.063 [2024-10-07 13:36:22.748514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.063 [2024-10-07 13:36:22.748697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-10-07 13:36:22.748726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.063 [2024-10-07 13:36:22.748743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.063 [2024-10-07 13:36:22.748769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.063 [2024-10-07 13:36:22.748793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.063 [2024-10-07 13:36:22.748809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.063 [2024-10-07 13:36:22.748826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.063 [2024-10-07 13:36:22.748853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.063 [2024-10-07 13:36:22.758603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.063 [2024-10-07 13:36:22.758822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-10-07 13:36:22.758854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.063 [2024-10-07 13:36:22.758872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.063 [2024-10-07 13:36:22.758897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.063 [2024-10-07 13:36:22.758922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.063 [2024-10-07 13:36:22.758937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.063 [2024-10-07 13:36:22.758951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.063 [2024-10-07 13:36:22.758975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.063 [2024-10-07 13:36:22.768710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.063 [2024-10-07 13:36:22.768863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-10-07 13:36:22.768900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.063 [2024-10-07 13:36:22.768919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.063 [2024-10-07 13:36:22.768944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.063 [2024-10-07 13:36:22.768969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.063 [2024-10-07 13:36:22.768985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.063 [2024-10-07 13:36:22.768999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.063 [2024-10-07 13:36:22.769024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.063 [2024-10-07 13:36:22.781606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.063 [2024-10-07 13:36:22.782234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-10-07 13:36:22.782266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.063 [2024-10-07 13:36:22.782283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.063 [2024-10-07 13:36:22.782515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.063 [2024-10-07 13:36:22.782587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.063 [2024-10-07 13:36:22.782609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.063 [2024-10-07 13:36:22.782624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.063 [2024-10-07 13:36:22.782818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.063 [2024-10-07 13:36:22.796095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.063 [2024-10-07 13:36:22.796273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-10-07 13:36:22.796304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.063 [2024-10-07 13:36:22.796322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.063 [2024-10-07 13:36:22.796348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.063 [2024-10-07 13:36:22.796372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.063 [2024-10-07 13:36:22.796388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.063 [2024-10-07 13:36:22.796402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.063 [2024-10-07 13:36:22.796427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.063 [2024-10-07 13:36:22.806180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.063 [2024-10-07 13:36:22.806367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-10-07 13:36:22.806396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.063 [2024-10-07 13:36:22.806414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.063 [2024-10-07 13:36:22.806439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.063 [2024-10-07 13:36:22.806470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.063 [2024-10-07 13:36:22.806487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.063 [2024-10-07 13:36:22.806501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.063 [2024-10-07 13:36:22.806525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.063 [2024-10-07 13:36:22.816266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.063 [2024-10-07 13:36:22.816388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-10-07 13:36:22.816416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.063 [2024-10-07 13:36:22.816432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.063 [2024-10-07 13:36:22.816457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.063 [2024-10-07 13:36:22.816480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.063 [2024-10-07 13:36:22.816494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.063 [2024-10-07 13:36:22.816508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.063 [2024-10-07 13:36:22.816532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.063 [2024-10-07 13:36:22.828901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.063 [2024-10-07 13:36:22.829511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.063 [2024-10-07 13:36:22.829543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.064 [2024-10-07 13:36:22.829561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.064 [2024-10-07 13:36:22.829787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.064 [2024-10-07 13:36:22.829844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.064 [2024-10-07 13:36:22.829866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.064 [2024-10-07 13:36:22.829880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.064 [2024-10-07 13:36:22.830063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.064 [2024-10-07 13:36:22.844466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.064 [2024-10-07 13:36:22.845167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-10-07 13:36:22.845199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.064 [2024-10-07 13:36:22.845216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.064 [2024-10-07 13:36:22.845613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.064 [2024-10-07 13:36:22.845847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.064 [2024-10-07 13:36:22.845871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.064 [2024-10-07 13:36:22.845886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.064 [2024-10-07 13:36:22.845943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.064 [2024-10-07 13:36:22.861338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.064 [2024-10-07 13:36:22.861528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-10-07 13:36:22.861558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.064 [2024-10-07 13:36:22.861575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.064 [2024-10-07 13:36:22.861602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.064 [2024-10-07 13:36:22.861627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.064 [2024-10-07 13:36:22.861642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.064 [2024-10-07 13:36:22.861656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.064 [2024-10-07 13:36:22.862297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.064 [2024-10-07 13:36:22.876380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.064 [2024-10-07 13:36:22.876589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-10-07 13:36:22.876619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.064 [2024-10-07 13:36:22.876636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.064 [2024-10-07 13:36:22.876662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.064 [2024-10-07 13:36:22.876698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.064 [2024-10-07 13:36:22.876714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.064 [2024-10-07 13:36:22.876727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.064 [2024-10-07 13:36:22.876752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.064 [2024-10-07 13:36:22.890696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.064 [2024-10-07 13:36:22.890850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-10-07 13:36:22.890879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.064 [2024-10-07 13:36:22.890896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.064 [2024-10-07 13:36:22.890922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.064 [2024-10-07 13:36:22.890961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.064 [2024-10-07 13:36:22.890980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.064 [2024-10-07 13:36:22.890993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.064 [2024-10-07 13:36:22.891018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.064 [2024-10-07 13:36:22.906025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.064 [2024-10-07 13:36:22.907322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-10-07 13:36:22.907354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.064 [2024-10-07 13:36:22.907378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.064 [2024-10-07 13:36:22.907937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.064 [2024-10-07 13:36:22.908204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.064 [2024-10-07 13:36:22.908230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.064 [2024-10-07 13:36:22.908245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.064 [2024-10-07 13:36:22.908449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.064 [2024-10-07 13:36:22.916116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.064 [2024-10-07 13:36:22.916264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-10-07 13:36:22.916293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.064 [2024-10-07 13:36:22.916311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.064 [2024-10-07 13:36:22.918949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.064 [2024-10-07 13:36:22.921758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.064 [2024-10-07 13:36:22.921786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.064 [2024-10-07 13:36:22.921803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.064 [2024-10-07 13:36:22.922840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.064 [2024-10-07 13:36:22.926199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.064 [2024-10-07 13:36:22.926365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-10-07 13:36:22.926396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.064 [2024-10-07 13:36:22.926413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.064 [2024-10-07 13:36:22.926439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.064 [2024-10-07 13:36:22.926464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.064 [2024-10-07 13:36:22.926480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.064 [2024-10-07 13:36:22.926493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.064 [2024-10-07 13:36:22.926518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.064 [2024-10-07 13:36:22.938577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.064 [2024-10-07 13:36:22.938714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-10-07 13:36:22.938743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.064 [2024-10-07 13:36:22.938760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.064 [2024-10-07 13:36:22.938786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.064 [2024-10-07 13:36:22.938810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.064 [2024-10-07 13:36:22.938831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.064 [2024-10-07 13:36:22.938846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.064 [2024-10-07 13:36:22.938870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.064 [2024-10-07 13:36:22.948881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.064 [2024-10-07 13:36:22.949096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-10-07 13:36:22.949127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.064 [2024-10-07 13:36:22.949146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.064 [2024-10-07 13:36:22.949253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.064 [2024-10-07 13:36:22.951468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.064 [2024-10-07 13:36:22.951497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.064 [2024-10-07 13:36:22.951512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.064 [2024-10-07 13:36:22.953880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.064 [2024-10-07 13:36:22.958982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.064 [2024-10-07 13:36:22.959181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.064 [2024-10-07 13:36:22.959211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.064 [2024-10-07 13:36:22.959228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.064 [2024-10-07 13:36:22.959488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.064 [2024-10-07 13:36:22.959638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.064 [2024-10-07 13:36:22.959688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.064 [2024-10-07 13:36:22.959704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.064 [2024-10-07 13:36:22.959820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.064 [2024-10-07 13:36:22.970412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.065 [2024-10-07 13:36:22.970729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-10-07 13:36:22.970761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.065 [2024-10-07 13:36:22.970779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.065 [2024-10-07 13:36:22.970829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.065 [2024-10-07 13:36:22.970858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.065 [2024-10-07 13:36:22.970873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.065 [2024-10-07 13:36:22.970886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.065 [2024-10-07 13:36:22.970912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.065 [2024-10-07 13:36:22.984883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.065 [2024-10-07 13:36:22.985300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-10-07 13:36:22.985331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.065 [2024-10-07 13:36:22.985349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.065 [2024-10-07 13:36:22.985553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.065 [2024-10-07 13:36:22.985611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.065 [2024-10-07 13:36:22.985648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.065 [2024-10-07 13:36:22.985662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.065 [2024-10-07 13:36:22.985717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.065 [2024-10-07 13:36:22.995247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.065 [2024-10-07 13:36:22.995480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-10-07 13:36:22.995511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.065 [2024-10-07 13:36:22.995530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.065 [2024-10-07 13:36:22.995638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.065 [2024-10-07 13:36:22.995774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.065 [2024-10-07 13:36:22.995798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.065 [2024-10-07 13:36:22.995813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.065 [2024-10-07 13:36:22.995922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.065 [2024-10-07 13:36:23.005340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.065 [2024-10-07 13:36:23.005473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-10-07 13:36:23.005504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.065 [2024-10-07 13:36:23.005522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.065 [2024-10-07 13:36:23.005547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.065 [2024-10-07 13:36:23.005571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.065 [2024-10-07 13:36:23.005587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.065 [2024-10-07 13:36:23.005600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.065 [2024-10-07 13:36:23.005625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.065 [2024-10-07 13:36:23.018968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.065 [2024-10-07 13:36:23.019325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-10-07 13:36:23.019358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.065 [2024-10-07 13:36:23.019376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.065 [2024-10-07 13:36:23.019640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.065 [2024-10-07 13:36:23.019726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.065 [2024-10-07 13:36:23.019749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.065 [2024-10-07 13:36:23.019764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.065 [2024-10-07 13:36:23.019948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.065 [2024-10-07 13:36:23.033862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.065 [2024-10-07 13:36:23.034019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-10-07 13:36:23.034050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.065 [2024-10-07 13:36:23.034068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.065 [2024-10-07 13:36:23.034253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.065 [2024-10-07 13:36:23.034325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.065 [2024-10-07 13:36:23.034347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.065 [2024-10-07 13:36:23.034361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.065 [2024-10-07 13:36:23.034386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.065 [2024-10-07 13:36:23.049102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.065 [2024-10-07 13:36:23.049629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-10-07 13:36:23.049661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.065 [2024-10-07 13:36:23.049688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.065 [2024-10-07 13:36:23.049906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.065 [2024-10-07 13:36:23.049964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.065 [2024-10-07 13:36:23.049991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.065 [2024-10-07 13:36:23.050006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.065 [2024-10-07 13:36:23.050188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.065 [2024-10-07 13:36:23.064212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.065 [2024-10-07 13:36:23.064377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-10-07 13:36:23.064406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.065 [2024-10-07 13:36:23.064423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.065 [2024-10-07 13:36:23.064449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.065 [2024-10-07 13:36:23.064474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.065 [2024-10-07 13:36:23.064489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.065 [2024-10-07 13:36:23.064509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.065 [2024-10-07 13:36:23.064534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.065 [2024-10-07 13:36:23.074299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.065 [2024-10-07 13:36:23.074502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-10-07 13:36:23.074532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.065 [2024-10-07 13:36:23.074548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.065 [2024-10-07 13:36:23.074575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.065 [2024-10-07 13:36:23.074599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.065 [2024-10-07 13:36:23.074615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.065 [2024-10-07 13:36:23.074628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.065 [2024-10-07 13:36:23.077342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.065 [2024-10-07 13:36:23.086163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.065 [2024-10-07 13:36:23.086408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-10-07 13:36:23.086441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.065 [2024-10-07 13:36:23.086459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.065 [2024-10-07 13:36:23.086567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.065 [2024-10-07 13:36:23.086709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.065 [2024-10-07 13:36:23.086730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.065 [2024-10-07 13:36:23.086743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.065 [2024-10-07 13:36:23.086845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.065 [2024-10-07 13:36:23.096357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.065 [2024-10-07 13:36:23.096623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.065 [2024-10-07 13:36:23.096656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.065 [2024-10-07 13:36:23.096685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.065 [2024-10-07 13:36:23.096870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.065 [2024-10-07 13:36:23.096944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.065 [2024-10-07 13:36:23.096971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.065 [2024-10-07 13:36:23.096985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.066 [2024-10-07 13:36:23.097010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.066 [2024-10-07 13:36:23.107233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.066 [2024-10-07 13:36:23.109615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-10-07 13:36:23.109647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.066 [2024-10-07 13:36:23.109674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.066 [2024-10-07 13:36:23.110639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.066 [2024-10-07 13:36:23.110958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.066 [2024-10-07 13:36:23.110984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.066 [2024-10-07 13:36:23.110999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.066 [2024-10-07 13:36:23.111245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.066 [2024-10-07 13:36:23.117357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.066 [2024-10-07 13:36:23.117511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-10-07 13:36:23.117540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.066 [2024-10-07 13:36:23.117558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.066 [2024-10-07 13:36:23.117583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.066 [2024-10-07 13:36:23.117607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.066 [2024-10-07 13:36:23.117623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.066 [2024-10-07 13:36:23.117637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.066 [2024-10-07 13:36:23.117662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.066 [2024-10-07 13:36:23.127545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.066 [2024-10-07 13:36:23.127776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-10-07 13:36:23.127806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.066 [2024-10-07 13:36:23.127823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.066 [2024-10-07 13:36:23.128009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.066 [2024-10-07 13:36:23.128067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.066 [2024-10-07 13:36:23.128087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.066 [2024-10-07 13:36:23.128101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.066 [2024-10-07 13:36:23.128142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.066 [2024-10-07 13:36:23.141771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.066 [2024-10-07 13:36:23.142089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-10-07 13:36:23.142121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.066 [2024-10-07 13:36:23.142154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.066 [2024-10-07 13:36:23.142359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.066 [2024-10-07 13:36:23.142423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.066 [2024-10-07 13:36:23.142444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.066 [2024-10-07 13:36:23.142458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.066 [2024-10-07 13:36:23.142484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.066 [2024-10-07 13:36:23.156422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.066 [2024-10-07 13:36:23.156599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-10-07 13:36:23.156628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.066 [2024-10-07 13:36:23.156645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.066 [2024-10-07 13:36:23.156678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.066 [2024-10-07 13:36:23.156706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.066 [2024-10-07 13:36:23.156721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.066 [2024-10-07 13:36:23.156735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.066 [2024-10-07 13:36:23.156760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.066 [2024-10-07 13:36:23.167416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.066 [2024-10-07 13:36:23.167705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-10-07 13:36:23.167736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.066 [2024-10-07 13:36:23.167753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.066 [2024-10-07 13:36:23.167863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.066 [2024-10-07 13:36:23.167988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.066 [2024-10-07 13:36:23.168009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.066 [2024-10-07 13:36:23.168038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.066 [2024-10-07 13:36:23.169003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.066 [2024-10-07 13:36:23.178797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.066 [2024-10-07 13:36:23.178961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-10-07 13:36:23.178991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.066 [2024-10-07 13:36:23.179008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.066 [2024-10-07 13:36:23.181564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.066 [2024-10-07 13:36:23.182525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.066 [2024-10-07 13:36:23.182551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.066 [2024-10-07 13:36:23.182580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.066 [2024-10-07 13:36:23.182785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.066 [2024-10-07 13:36:23.189062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.066 [2024-10-07 13:36:23.189253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-10-07 13:36:23.189282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.066 [2024-10-07 13:36:23.189299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.066 [2024-10-07 13:36:23.189325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.066 [2024-10-07 13:36:23.189350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.066 [2024-10-07 13:36:23.189365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.066 [2024-10-07 13:36:23.189379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.066 [2024-10-07 13:36:23.189403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.066 [2024-10-07 13:36:23.201172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.066 [2024-10-07 13:36:23.201345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-10-07 13:36:23.201375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.066 [2024-10-07 13:36:23.201393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.066 [2024-10-07 13:36:23.201419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.066 [2024-10-07 13:36:23.201444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.066 [2024-10-07 13:36:23.201460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.066 [2024-10-07 13:36:23.201474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.066 [2024-10-07 13:36:23.201498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.066 [2024-10-07 13:36:23.211260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.066 [2024-10-07 13:36:23.211417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-10-07 13:36:23.211446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.066 [2024-10-07 13:36:23.211463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.066 [2024-10-07 13:36:23.211488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.066 [2024-10-07 13:36:23.211512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.066 [2024-10-07 13:36:23.211527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.066 [2024-10-07 13:36:23.211541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.066 [2024-10-07 13:36:23.211566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.066 [2024-10-07 13:36:23.221343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.066 [2024-10-07 13:36:23.221495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.066 [2024-10-07 13:36:23.221530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.066 [2024-10-07 13:36:23.221548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.066 [2024-10-07 13:36:23.221690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.066 [2024-10-07 13:36:23.221907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.066 [2024-10-07 13:36:23.221931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.067 [2024-10-07 13:36:23.221961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.067 [2024-10-07 13:36:23.222010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.067 [2024-10-07 13:36:23.234474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.067 [2024-10-07 13:36:23.235119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.067 [2024-10-07 13:36:23.235152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.067 [2024-10-07 13:36:23.235170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.067 [2024-10-07 13:36:23.235404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.067 [2024-10-07 13:36:23.235504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.067 [2024-10-07 13:36:23.235525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.067 [2024-10-07 13:36:23.235538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.067 [2024-10-07 13:36:23.235580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.067 [2024-10-07 13:36:23.250130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.067 [2024-10-07 13:36:23.250438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.067 [2024-10-07 13:36:23.250471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.067 [2024-10-07 13:36:23.250489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.067 [2024-10-07 13:36:23.250550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.067 [2024-10-07 13:36:23.250578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.067 [2024-10-07 13:36:23.250594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.067 [2024-10-07 13:36:23.250608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.067 [2024-10-07 13:36:23.250875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.067 [2024-10-07 13:36:23.261967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.067 [2024-10-07 13:36:23.262189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.067 [2024-10-07 13:36:23.262219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.067 [2024-10-07 13:36:23.262236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.067 [2024-10-07 13:36:23.262344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.067 [2024-10-07 13:36:23.262461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.067 [2024-10-07 13:36:23.262482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.067 [2024-10-07 13:36:23.262496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.067 [2024-10-07 13:36:23.262602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.067 [2024-10-07 13:36:23.272403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.067 [2024-10-07 13:36:23.272641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.067 [2024-10-07 13:36:23.272680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.067 [2024-10-07 13:36:23.272700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.067 [2024-10-07 13:36:23.273206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.067 [2024-10-07 13:36:23.273238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.067 [2024-10-07 13:36:23.273253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.067 [2024-10-07 13:36:23.273266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.067 [2024-10-07 13:36:23.273290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.067 [2024-10-07 13:36:23.282558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.067 [2024-10-07 13:36:23.282681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.067 [2024-10-07 13:36:23.282721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.067 [2024-10-07 13:36:23.282738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.067 [2024-10-07 13:36:23.282764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.067 [2024-10-07 13:36:23.282788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.067 [2024-10-07 13:36:23.282803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.067 [2024-10-07 13:36:23.282816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.067 [2024-10-07 13:36:23.283076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.067 [2024-10-07 13:36:23.294211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.067 [2024-10-07 13:36:23.294443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.067 [2024-10-07 13:36:23.294475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.067 [2024-10-07 13:36:23.294494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.067 [2024-10-07 13:36:23.294604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.067 [2024-10-07 13:36:23.296779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.067 [2024-10-07 13:36:23.296807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.067 [2024-10-07 13:36:23.296822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.067 [2024-10-07 13:36:23.297703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.067 [2024-10-07 13:36:23.304300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.067 [2024-10-07 13:36:23.304498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.067 [2024-10-07 13:36:23.304528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.067 [2024-10-07 13:36:23.304546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.067 [2024-10-07 13:36:23.304572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.067 [2024-10-07 13:36:23.304598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.067 [2024-10-07 13:36:23.304613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.067 [2024-10-07 13:36:23.304627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.067 [2024-10-07 13:36:23.304651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.067 [2024-10-07 13:36:23.314629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.067 [2024-10-07 13:36:23.314773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.067 [2024-10-07 13:36:23.314802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.067 [2024-10-07 13:36:23.314820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.067 [2024-10-07 13:36:23.315003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.067 [2024-10-07 13:36:23.315084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.067 [2024-10-07 13:36:23.315105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.067 [2024-10-07 13:36:23.315119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.067 [2024-10-07 13:36:23.315143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.067 [2024-10-07 13:36:23.327960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.067 [2024-10-07 13:36:23.328615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-10-07 13:36:23.328647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.067 [2024-10-07 13:36:23.328674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.067 [2024-10-07 13:36:23.328898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.067 [2024-10-07 13:36:23.329125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.067 [2024-10-07 13:36:23.329151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.067 [2024-10-07 13:36:23.329166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.067 [2024-10-07 13:36:23.329217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.067 [2024-10-07 13:36:23.344201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.067 [2024-10-07 13:36:23.344474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.067 [2024-10-07 13:36:23.344507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.067 [2024-10-07 13:36:23.344531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.068 [2024-10-07 13:36:23.345040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.068 [2024-10-07 13:36:23.345277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.068 [2024-10-07 13:36:23.345302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.068 [2024-10-07 13:36:23.345318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.068 [2024-10-07 13:36:23.345370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.068 [2024-10-07 13:36:23.356679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.068 [2024-10-07 13:36:23.358976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-10-07 13:36:23.359010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.068 [2024-10-07 13:36:23.359028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.068 [2024-10-07 13:36:23.359843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.068 [2024-10-07 13:36:23.360253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.068 [2024-10-07 13:36:23.360278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.068 [2024-10-07 13:36:23.360307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.068 [2024-10-07 13:36:23.360385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.068 [2024-10-07 13:36:23.366767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.068 [2024-10-07 13:36:23.366949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-10-07 13:36:23.366977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.068 [2024-10-07 13:36:23.366994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.068 [2024-10-07 13:36:23.367019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.068 [2024-10-07 13:36:23.367044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.068 [2024-10-07 13:36:23.367058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.068 [2024-10-07 13:36:23.367071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.068 [2024-10-07 13:36:23.367095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.068 [2024-10-07 13:36:23.376881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.068 [2024-10-07 13:36:23.377060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-10-07 13:36:23.377089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.068 [2024-10-07 13:36:23.377106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.068 [2024-10-07 13:36:23.377131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.068 [2024-10-07 13:36:23.377593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.068 [2024-10-07 13:36:23.377638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.068 [2024-10-07 13:36:23.377655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.068 [2024-10-07 13:36:23.377886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.068 [2024-10-07 13:36:23.389863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.068 [2024-10-07 13:36:23.390474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-10-07 13:36:23.390506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.068 [2024-10-07 13:36:23.390524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.068 [2024-10-07 13:36:23.390754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.068 [2024-10-07 13:36:23.390812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.068 [2024-10-07 13:36:23.390832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.068 [2024-10-07 13:36:23.390847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.068 [2024-10-07 13:36:23.391039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.068 [2024-10-07 13:36:23.402806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.068 [2024-10-07 13:36:23.403046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-10-07 13:36:23.403078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.068 [2024-10-07 13:36:23.403096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.068 [2024-10-07 13:36:23.405471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.068 [2024-10-07 13:36:23.406339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.068 [2024-10-07 13:36:23.406378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.068 [2024-10-07 13:36:23.406393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.068 [2024-10-07 13:36:23.406803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.068 [2024-10-07 13:36:23.412896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.068 [2024-10-07 13:36:23.413050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-10-07 13:36:23.413079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.068 [2024-10-07 13:36:23.413096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.068 [2024-10-07 13:36:23.413122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.068 [2024-10-07 13:36:23.413146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.068 [2024-10-07 13:36:23.413162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.068 [2024-10-07 13:36:23.413176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.068 [2024-10-07 13:36:23.413200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.068 [2024-10-07 13:36:23.423099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.068 [2024-10-07 13:36:23.423261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-10-07 13:36:23.423292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.068 [2024-10-07 13:36:23.423310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.068 [2024-10-07 13:36:23.423335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.068 [2024-10-07 13:36:23.423363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.068 [2024-10-07 13:36:23.423378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.068 [2024-10-07 13:36:23.423393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.068 [2024-10-07 13:36:23.423877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.068 [2024-10-07 13:36:23.436386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.068 [2024-10-07 13:36:23.436755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-10-07 13:36:23.436787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.068 [2024-10-07 13:36:23.436805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.068 [2024-10-07 13:36:23.437009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.068 [2024-10-07 13:36:23.437082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.068 [2024-10-07 13:36:23.437118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.068 [2024-10-07 13:36:23.437132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.068 [2024-10-07 13:36:23.437324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.068 [2024-10-07 13:36:23.448287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.068 [2024-10-07 13:36:23.448527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-10-07 13:36:23.448569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.068 [2024-10-07 13:36:23.448588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.068 [2024-10-07 13:36:23.451102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.068 [2024-10-07 13:36:23.452275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.068 [2024-10-07 13:36:23.452302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.068 [2024-10-07 13:36:23.452332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.068 [2024-10-07 13:36:23.452796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.068 [2024-10-07 13:36:23.458372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.068 [2024-10-07 13:36:23.458586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-10-07 13:36:23.458615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.068 [2024-10-07 13:36:23.458631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.068 [2024-10-07 13:36:23.458664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.068 [2024-10-07 13:36:23.458699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.068 [2024-10-07 13:36:23.458714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.068 [2024-10-07 13:36:23.458727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.068 [2024-10-07 13:36:23.458753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.068 [2024-10-07 13:36:23.468579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.068 [2024-10-07 13:36:23.468769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.068 [2024-10-07 13:36:23.468798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.069 [2024-10-07 13:36:23.468816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.069 [2024-10-07 13:36:23.469000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.069 [2024-10-07 13:36:23.469057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.069 [2024-10-07 13:36:23.469093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.069 [2024-10-07 13:36:23.469107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.069 [2024-10-07 13:36:23.469132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.069 [2024-10-07 13:36:23.482642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.069 [2024-10-07 13:36:23.483232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-10-07 13:36:23.483265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.069 [2024-10-07 13:36:23.483283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.069 [2024-10-07 13:36:23.483502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.069 [2024-10-07 13:36:23.483559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.069 [2024-10-07 13:36:23.483580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.069 [2024-10-07 13:36:23.483593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.069 [2024-10-07 13:36:23.483619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.069 [2024-10-07 13:36:23.497522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.069 [2024-10-07 13:36:23.498162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-10-07 13:36:23.498194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.069 [2024-10-07 13:36:23.498212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.069 [2024-10-07 13:36:23.498447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.069 [2024-10-07 13:36:23.498519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.069 [2024-10-07 13:36:23.498540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.069 [2024-10-07 13:36:23.498561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.069 [2024-10-07 13:36:23.498765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.069 [2024-10-07 13:36:23.508630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.069 [2024-10-07 13:36:23.508840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-10-07 13:36:23.508871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.069 [2024-10-07 13:36:23.508888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.069 [2024-10-07 13:36:23.511076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.069 [2024-10-07 13:36:23.511458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.069 [2024-10-07 13:36:23.511484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.069 [2024-10-07 13:36:23.511499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.069 [2024-10-07 13:36:23.512285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.069 [2024-10-07 13:36:23.518724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.069 [2024-10-07 13:36:23.518874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-10-07 13:36:23.518903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.069 [2024-10-07 13:36:23.518920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.069 [2024-10-07 13:36:23.523070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.069 [2024-10-07 13:36:23.523271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.069 [2024-10-07 13:36:23.523295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.069 [2024-10-07 13:36:23.523309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.069 [2024-10-07 13:36:23.523417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.069 [2024-10-07 13:36:23.528809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.069 [2024-10-07 13:36:23.529139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-10-07 13:36:23.529171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.069 [2024-10-07 13:36:23.529190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.069 [2024-10-07 13:36:23.529242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.069 [2024-10-07 13:36:23.529270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.069 [2024-10-07 13:36:23.529286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.069 [2024-10-07 13:36:23.529300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.069 [2024-10-07 13:36:23.529325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.069 [2024-10-07 13:36:23.541181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.069 [2024-10-07 13:36:23.541886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-10-07 13:36:23.541919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.069 [2024-10-07 13:36:23.541937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.069 [2024-10-07 13:36:23.542190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.069 [2024-10-07 13:36:23.542400] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.069 [2024-10-07 13:36:23.542440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.069 [2024-10-07 13:36:23.542455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.069 [2024-10-07 13:36:23.542506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.069 [2024-10-07 13:36:23.551275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.069 [2024-10-07 13:36:23.551461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-10-07 13:36:23.551491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.069 [2024-10-07 13:36:23.551508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.069 [2024-10-07 13:36:23.553172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.069 [2024-10-07 13:36:23.554986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.069 [2024-10-07 13:36:23.555013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.069 [2024-10-07 13:36:23.555027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.069 [2024-10-07 13:36:23.555673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.069 [2024-10-07 13:36:23.561997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.069 [2024-10-07 13:36:23.562152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-10-07 13:36:23.562182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.069 [2024-10-07 13:36:23.562199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.069 [2024-10-07 13:36:23.562225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.069 [2024-10-07 13:36:23.562249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.069 [2024-10-07 13:36:23.562265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.069 [2024-10-07 13:36:23.562278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.069 [2024-10-07 13:36:23.562302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.069 [2024-10-07 13:36:23.572097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.069 [2024-10-07 13:36:23.572230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-10-07 13:36:23.572259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.069 [2024-10-07 13:36:23.572277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.069 [2024-10-07 13:36:23.572477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.069 [2024-10-07 13:36:23.572554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.069 [2024-10-07 13:36:23.572574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.069 [2024-10-07 13:36:23.572603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.069 [2024-10-07 13:36:23.572628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.069 [2024-10-07 13:36:23.587648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.069 [2024-10-07 13:36:23.587812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.069 [2024-10-07 13:36:23.587841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.069 [2024-10-07 13:36:23.587859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.069 [2024-10-07 13:36:23.587885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.069 [2024-10-07 13:36:23.587910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.069 [2024-10-07 13:36:23.587925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.069 [2024-10-07 13:36:23.587940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.069 [2024-10-07 13:36:23.587965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.069 8354.00 IOPS, 32.63 MiB/s [2024-10-07T11:36:37.781Z] [2024-10-07 13:36:23.601946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.070 [2024-10-07 13:36:23.602152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.070 [2024-10-07 13:36:23.602182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.070 [2024-10-07 13:36:23.602199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.070 [2024-10-07 13:36:23.602308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.070 [2024-10-07 13:36:23.602435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.070 [2024-10-07 13:36:23.602456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.070 [2024-10-07 13:36:23.602470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.070 [2024-10-07 13:36:23.602573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.070 [2024-10-07 13:36:23.612030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.070 [2024-10-07 13:36:23.612210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.070 [2024-10-07 13:36:23.612239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.070 [2024-10-07 13:36:23.612256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.070 [2024-10-07 13:36:23.612282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.070 [2024-10-07 13:36:23.612306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.070 [2024-10-07 13:36:23.612322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.070 [2024-10-07 13:36:23.612336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.070 [2024-10-07 13:36:23.612366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.070 [2024-10-07 13:36:23.622113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.070 [2024-10-07 13:36:23.622275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.070 [2024-10-07 13:36:23.622305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.070 [2024-10-07 13:36:23.622322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.070 [2024-10-07 13:36:23.622522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.070 [2024-10-07 13:36:23.622592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.070 [2024-10-07 13:36:23.622612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.070 [2024-10-07 13:36:23.622640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.070 [2024-10-07 13:36:23.622674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.070 [2024-10-07 13:36:23.637123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.070 [2024-10-07 13:36:23.637321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.070 [2024-10-07 13:36:23.637353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.070 [2024-10-07 13:36:23.637371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.070 [2024-10-07 13:36:23.637397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.070 [2024-10-07 13:36:23.637422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.070 [2024-10-07 13:36:23.637437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.070 [2024-10-07 13:36:23.637451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.070 [2024-10-07 13:36:23.637475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.070 [2024-10-07 13:36:23.651129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.070 [2024-10-07 13:36:23.651515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.070 [2024-10-07 13:36:23.651547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.070 [2024-10-07 13:36:23.651565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.070 [2024-10-07 13:36:23.651781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.070 [2024-10-07 13:36:23.651998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.070 [2024-10-07 13:36:23.652022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.070 [2024-10-07 13:36:23.652037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.070 [2024-10-07 13:36:23.652088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.070 [2024-10-07 13:36:23.666411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.070 [2024-10-07 13:36:23.666829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-10-07 13:36:23.666866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.070 [2024-10-07 13:36:23.666884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.070 [2024-10-07 13:36:23.667089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.070 [2024-10-07 13:36:23.667161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.070 [2024-10-07 13:36:23.667182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.070 [2024-10-07 13:36:23.667196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.070 [2024-10-07 13:36:23.667237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.070 [2024-10-07 13:36:23.682499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.070 [2024-10-07 13:36:23.682879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-10-07 13:36:23.682911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.070 [2024-10-07 13:36:23.682928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.070 [2024-10-07 13:36:23.683133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.070 [2024-10-07 13:36:23.683191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.070 [2024-10-07 13:36:23.683212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.070 [2024-10-07 13:36:23.683225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.070 [2024-10-07 13:36:23.683251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.070 [2024-10-07 13:36:23.697756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.070 [2024-10-07 13:36:23.697910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-10-07 13:36:23.697941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.070 [2024-10-07 13:36:23.697959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.070 [2024-10-07 13:36:23.697984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.070 [2024-10-07 13:36:23.698009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.070 [2024-10-07 13:36:23.698024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.070 [2024-10-07 13:36:23.698038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.070 [2024-10-07 13:36:23.698062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.070 [2024-10-07 13:36:23.707840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.070 [2024-10-07 13:36:23.708004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-10-07 13:36:23.708035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.070 [2024-10-07 13:36:23.708053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.070 [2024-10-07 13:36:23.708078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.070 [2024-10-07 13:36:23.708108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.070 [2024-10-07 13:36:23.708124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.070 [2024-10-07 13:36:23.708137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.070 [2024-10-07 13:36:23.708162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.070 [2024-10-07 13:36:23.718079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.070 [2024-10-07 13:36:23.718279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-10-07 13:36:23.718309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.070 [2024-10-07 13:36:23.718326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.070 [2024-10-07 13:36:23.718351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.070 [2024-10-07 13:36:23.718375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.070 [2024-10-07 13:36:23.718390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.070 [2024-10-07 13:36:23.718404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.070 [2024-10-07 13:36:23.718428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.070 [2024-10-07 13:36:23.730575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.070 [2024-10-07 13:36:23.730921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.070 [2024-10-07 13:36:23.730954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.070 [2024-10-07 13:36:23.730972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.070 [2024-10-07 13:36:23.731316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.070 [2024-10-07 13:36:23.731395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.070 [2024-10-07 13:36:23.731417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.070 [2024-10-07 13:36:23.731446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.070 [2024-10-07 13:36:23.731629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.071 [2024-10-07 13:36:23.742536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.071 [2024-10-07 13:36:23.742750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-10-07 13:36:23.742780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.071 [2024-10-07 13:36:23.742799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.071 [2024-10-07 13:36:23.742908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.071 [2024-10-07 13:36:23.745953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.071 [2024-10-07 13:36:23.745980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.071 [2024-10-07 13:36:23.745995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.071 [2024-10-07 13:36:23.746834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.071 [2024-10-07 13:36:23.752621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.071 [2024-10-07 13:36:23.752796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-10-07 13:36:23.752826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.071 [2024-10-07 13:36:23.752844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.071 [2024-10-07 13:36:23.752869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.071 [2024-10-07 13:36:23.752894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.071 [2024-10-07 13:36:23.752909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.071 [2024-10-07 13:36:23.752922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.071 [2024-10-07 13:36:23.752946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.071 [2024-10-07 13:36:23.762728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.071 [2024-10-07 13:36:23.762877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-10-07 13:36:23.762908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.071 [2024-10-07 13:36:23.762925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.071 [2024-10-07 13:36:23.763110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.071 [2024-10-07 13:36:23.763169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.071 [2024-10-07 13:36:23.763191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.071 [2024-10-07 13:36:23.763205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.071 [2024-10-07 13:36:23.763230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.071 [2024-10-07 13:36:23.776924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.071 [2024-10-07 13:36:23.777362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-10-07 13:36:23.777394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.071 [2024-10-07 13:36:23.777411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.071 [2024-10-07 13:36:23.777621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.071 [2024-10-07 13:36:23.777687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.071 [2024-10-07 13:36:23.777709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.071 [2024-10-07 13:36:23.777722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.071 [2024-10-07 13:36:23.777748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.071 [2024-10-07 13:36:23.792069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.071 [2024-10-07 13:36:23.792430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-10-07 13:36:23.792462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.071 [2024-10-07 13:36:23.792489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.071 [2024-10-07 13:36:23.792542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.071 [2024-10-07 13:36:23.792570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.071 [2024-10-07 13:36:23.792585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.071 [2024-10-07 13:36:23.792598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.071 [2024-10-07 13:36:23.792872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.071 [2024-10-07 13:36:23.805947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.071 [2024-10-07 13:36:23.806066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-10-07 13:36:23.806097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.071 [2024-10-07 13:36:23.806115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.071 [2024-10-07 13:36:23.806140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.071 [2024-10-07 13:36:23.806165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.071 [2024-10-07 13:36:23.806180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.071 [2024-10-07 13:36:23.806193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.071 [2024-10-07 13:36:23.806218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.071 [2024-10-07 13:36:23.817026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.071 [2024-10-07 13:36:23.817269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-10-07 13:36:23.817300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.071 [2024-10-07 13:36:23.817318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.071 [2024-10-07 13:36:23.817428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.071 [2024-10-07 13:36:23.817555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.071 [2024-10-07 13:36:23.817576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.071 [2024-10-07 13:36:23.817605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.071 [2024-10-07 13:36:23.817758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.071 [2024-10-07 13:36:23.827113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.071 [2024-10-07 13:36:23.827362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-10-07 13:36:23.827391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.071 [2024-10-07 13:36:23.827408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.071 [2024-10-07 13:36:23.827434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.071 [2024-10-07 13:36:23.827459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.071 [2024-10-07 13:36:23.827481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.071 [2024-10-07 13:36:23.827495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.071 [2024-10-07 13:36:23.827745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.071 [2024-10-07 13:36:23.838080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.071 [2024-10-07 13:36:23.838289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-10-07 13:36:23.838320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.071 [2024-10-07 13:36:23.838338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.071 [2024-10-07 13:36:23.838522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.071 [2024-10-07 13:36:23.838596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.071 [2024-10-07 13:36:23.838617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.071 [2024-10-07 13:36:23.838631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.071 [2024-10-07 13:36:23.838684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.071 [2024-10-07 13:36:23.850386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.071 [2024-10-07 13:36:23.850620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.071 [2024-10-07 13:36:23.850652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.071 [2024-10-07 13:36:23.850682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.071 [2024-10-07 13:36:23.852996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.071 [2024-10-07 13:36:23.853867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.072 [2024-10-07 13:36:23.853892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.072 [2024-10-07 13:36:23.853906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.072 [2024-10-07 13:36:23.854297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.072 [2024-10-07 13:36:23.860474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.072 [2024-10-07 13:36:23.860630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-10-07 13:36:23.860660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.072 [2024-10-07 13:36:23.860687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.072 [2024-10-07 13:36:23.860714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.072 [2024-10-07 13:36:23.860738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.072 [2024-10-07 13:36:23.860753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.072 [2024-10-07 13:36:23.860767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.072 [2024-10-07 13:36:23.860791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.072 [2024-10-07 13:36:23.870651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.072 [2024-10-07 13:36:23.870820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-10-07 13:36:23.870850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.072 [2024-10-07 13:36:23.870867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.072 [2024-10-07 13:36:23.870893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.072 [2024-10-07 13:36:23.870917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.072 [2024-10-07 13:36:23.870933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.072 [2024-10-07 13:36:23.870947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.072 [2024-10-07 13:36:23.871434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.072 [2024-10-07 13:36:23.883684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.072 [2024-10-07 13:36:23.884267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-10-07 13:36:23.884300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.072 [2024-10-07 13:36:23.884318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.072 [2024-10-07 13:36:23.884553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.072 [2024-10-07 13:36:23.884609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.072 [2024-10-07 13:36:23.884646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.072 [2024-10-07 13:36:23.884662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.072 [2024-10-07 13:36:23.884867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.072 [2024-10-07 13:36:23.894830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.072 [2024-10-07 13:36:23.895062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-10-07 13:36:23.895096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.072 [2024-10-07 13:36:23.895114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.072 [2024-10-07 13:36:23.897348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.072 [2024-10-07 13:36:23.897657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.072 [2024-10-07 13:36:23.897713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.072 [2024-10-07 13:36:23.897731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.072 [2024-10-07 13:36:23.898789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.072 [2024-10-07 13:36:23.904917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.072 [2024-10-07 13:36:23.905137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-10-07 13:36:23.905167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.072 [2024-10-07 13:36:23.905184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.072 [2024-10-07 13:36:23.909372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.072 [2024-10-07 13:36:23.909568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.072 [2024-10-07 13:36:23.909593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.072 [2024-10-07 13:36:23.909608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.072 [2024-10-07 13:36:23.909725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.072 [2024-10-07 13:36:23.915166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.072 [2024-10-07 13:36:23.915392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.072 [2024-10-07 13:36:23.915423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.072 [2024-10-07 13:36:23.915440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.072 [2024-10-07 13:36:23.915625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.072 [2024-10-07 13:36:23.915703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.072 [2024-10-07 13:36:23.915727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.072 [2024-10-07 13:36:23.915742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.072 [2024-10-07 13:36:23.915767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.072 [2024-10-07 13:36:23.927279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.072 [2024-10-07 13:36:23.928004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.072 [2024-10-07 13:36:23.928037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.072 [2024-10-07 13:36:23.928066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.072 [2024-10-07 13:36:23.928290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.072 [2024-10-07 13:36:23.928499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.072 [2024-10-07 13:36:23.928538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.072 [2024-10-07 13:36:23.928553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.072 [2024-10-07 13:36:23.928603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.072 [2024-10-07 13:36:23.937368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.072 [2024-10-07 13:36:23.937519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.072 [2024-10-07 13:36:23.937550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.072 [2024-10-07 13:36:23.937568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.072 [2024-10-07 13:36:23.937594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.072 [2024-10-07 13:36:23.940138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.072 [2024-10-07 13:36:23.940164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.072 [2024-10-07 13:36:23.940194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.072 [2024-10-07 13:36:23.941109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.072 [2024-10-07 13:36:23.947741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.072 [2024-10-07 13:36:23.947924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.072 [2024-10-07 13:36:23.947964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.072 [2024-10-07 13:36:23.947982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.072 [2024-10-07 13:36:23.948008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.072 [2024-10-07 13:36:23.948036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.072 [2024-10-07 13:36:23.948052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.072 [2024-10-07 13:36:23.948065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.072 [2024-10-07 13:36:23.948090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.072 [2024-10-07 13:36:23.957828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.072 [2024-10-07 13:36:23.957976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.072 [2024-10-07 13:36:23.958006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.072 [2024-10-07 13:36:23.958024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.072 [2024-10-07 13:36:23.958208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.072 [2024-10-07 13:36:23.958281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.072 [2024-10-07 13:36:23.958303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.072 [2024-10-07 13:36:23.958317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.072 [2024-10-07 13:36:23.958341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.072 [2024-10-07 13:36:23.970925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.072 [2024-10-07 13:36:23.971562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.072 [2024-10-07 13:36:23.971593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.072 [2024-10-07 13:36:23.971610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.072 [2024-10-07 13:36:23.971839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.073 [2024-10-07 13:36:23.972137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.073 [2024-10-07 13:36:23.972176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.073 [2024-10-07 13:36:23.972190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.073 [2024-10-07 13:36:23.972260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.073 [2024-10-07 13:36:23.981469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.073 [2024-10-07 13:36:23.981735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.073 [2024-10-07 13:36:23.981767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.073 [2024-10-07 13:36:23.981784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.073 [2024-10-07 13:36:23.981890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.073 [2024-10-07 13:36:23.982001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.073 [2024-10-07 13:36:23.982023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.073 [2024-10-07 13:36:23.982038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.073 [2024-10-07 13:36:23.983043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.073 [2024-10-07 13:36:23.992079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.073 [2024-10-07 13:36:23.992257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.073 [2024-10-07 13:36:23.992288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.073 [2024-10-07 13:36:23.992305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.073 [2024-10-07 13:36:23.992331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.073 [2024-10-07 13:36:23.992355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.073 [2024-10-07 13:36:23.992371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.073 [2024-10-07 13:36:23.992385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.073 [2024-10-07 13:36:23.992409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.073 [2024-10-07 13:36:24.002163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.073 [2024-10-07 13:36:24.002315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.073 [2024-10-07 13:36:24.002346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.073 [2024-10-07 13:36:24.002363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.073 [2024-10-07 13:36:24.002389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.073 [2024-10-07 13:36:24.002413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.073 [2024-10-07 13:36:24.002428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.073 [2024-10-07 13:36:24.002442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.073 [2024-10-07 13:36:24.002466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.073 [2024-10-07 13:36:24.014889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.073 [2024-10-07 13:36:24.015288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.073 [2024-10-07 13:36:24.015320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.073 [2024-10-07 13:36:24.015338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.073 [2024-10-07 13:36:24.015441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.073 [2024-10-07 13:36:24.015477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.073 [2024-10-07 13:36:24.015494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.073 [2024-10-07 13:36:24.015507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.073 [2024-10-07 13:36:24.015700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.073 [2024-10-07 13:36:24.025104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.073 [2024-10-07 13:36:24.025246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.073 [2024-10-07 13:36:24.025276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.073 [2024-10-07 13:36:24.025293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.073 [2024-10-07 13:36:24.025727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.073 [2024-10-07 13:36:24.025885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.073 [2024-10-07 13:36:24.025910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.073 [2024-10-07 13:36:24.025924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.073 [2024-10-07 13:36:24.026045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.073 [2024-10-07 13:36:24.035202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.073 [2024-10-07 13:36:24.035366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.073 [2024-10-07 13:36:24.035397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.073 [2024-10-07 13:36:24.035414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.073 [2024-10-07 13:36:24.035682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.073 [2024-10-07 13:36:24.035818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.073 [2024-10-07 13:36:24.035842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.073 [2024-10-07 13:36:24.035856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.073 [2024-10-07 13:36:24.035964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.073 [2024-10-07 13:36:24.045441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.073 [2024-10-07 13:36:24.045603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.073 [2024-10-07 13:36:24.045634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.073 [2024-10-07 13:36:24.045652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.073 [2024-10-07 13:36:24.045683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.073 [2024-10-07 13:36:24.045709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.073 [2024-10-07 13:36:24.045724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.073 [2024-10-07 13:36:24.045738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.073 [2024-10-07 13:36:24.045769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.073 [2024-10-07 13:36:24.058275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.073 [2024-10-07 13:36:24.058514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.073 [2024-10-07 13:36:24.058545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.073 [2024-10-07 13:36:24.058563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.073 [2024-10-07 13:36:24.058588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.073 [2024-10-07 13:36:24.058731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.073 [2024-10-07 13:36:24.058756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.073 [2024-10-07 13:36:24.058771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.073 [2024-10-07 13:36:24.058952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.073 [2024-10-07 13:36:24.070936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.073 [2024-10-07 13:36:24.071155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.073 [2024-10-07 13:36:24.071186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.073 [2024-10-07 13:36:24.071204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.073 [2024-10-07 13:36:24.071313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.073 [2024-10-07 13:36:24.071424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.073 [2024-10-07 13:36:24.071447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.073 [2024-10-07 13:36:24.071461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.073 [2024-10-07 13:36:24.074484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.073 [2024-10-07 13:36:24.081616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.073 [2024-10-07 13:36:24.081818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.073 [2024-10-07 13:36:24.081849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.073 [2024-10-07 13:36:24.081866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.073 [2024-10-07 13:36:24.081892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.073 [2024-10-07 13:36:24.081916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.073 [2024-10-07 13:36:24.081931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.073 [2024-10-07 13:36:24.081945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.073 [2024-10-07 13:36:24.081969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.073 [2024-10-07 13:36:24.091713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.073 [2024-10-07 13:36:24.091873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.073 [2024-10-07 13:36:24.091903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.074 [2024-10-07 13:36:24.091925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.074 [2024-10-07 13:36:24.091952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.074 [2024-10-07 13:36:24.091989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.074 [2024-10-07 13:36:24.092007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.074 [2024-10-07 13:36:24.092021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.074 [2024-10-07 13:36:24.092046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.074 [2024-10-07 13:36:24.104281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.074 [2024-10-07 13:36:24.104482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.074 [2024-10-07 13:36:24.104512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.074 [2024-10-07 13:36:24.104530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.074 [2024-10-07 13:36:24.104724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.074 [2024-10-07 13:36:24.104784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.074 [2024-10-07 13:36:24.104805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.074 [2024-10-07 13:36:24.104819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.074 [2024-10-07 13:36:24.104844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.074 [2024-10-07 13:36:24.114986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.074 [2024-10-07 13:36:24.115236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.074 [2024-10-07 13:36:24.115266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.074 [2024-10-07 13:36:24.115284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.074 [2024-10-07 13:36:24.115394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.074 [2024-10-07 13:36:24.115521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.074 [2024-10-07 13:36:24.115543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.074 [2024-10-07 13:36:24.115557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.074 [2024-10-07 13:36:24.115687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.074 [2024-10-07 13:36:24.125082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.074 [2024-10-07 13:36:24.125276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.074 [2024-10-07 13:36:24.125306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.074 [2024-10-07 13:36:24.125323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.074 [2024-10-07 13:36:24.125348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.074 [2024-10-07 13:36:24.125378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.074 [2024-10-07 13:36:24.125394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.074 [2024-10-07 13:36:24.125408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.074 [2024-10-07 13:36:24.125432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.074 [2024-10-07 13:36:24.137041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.074 [2024-10-07 13:36:24.137316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.074 [2024-10-07 13:36:24.137346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.074 [2024-10-07 13:36:24.137364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.074 [2024-10-07 13:36:24.137567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.074 [2024-10-07 13:36:24.137626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.074 [2024-10-07 13:36:24.137678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.074 [2024-10-07 13:36:24.137695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.074 [2024-10-07 13:36:24.137736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.074 [2024-10-07 13:36:24.149135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.074 [2024-10-07 13:36:24.151288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.074 [2024-10-07 13:36:24.151321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.074 [2024-10-07 13:36:24.151339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.074 [2024-10-07 13:36:24.152011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.074 [2024-10-07 13:36:24.152298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.074 [2024-10-07 13:36:24.152324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.074 [2024-10-07 13:36:24.152338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.074 [2024-10-07 13:36:24.152556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.074 [2024-10-07 13:36:24.159410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.074 [2024-10-07 13:36:24.159530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.074 [2024-10-07 13:36:24.159559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.074 [2024-10-07 13:36:24.159576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.074 [2024-10-07 13:36:24.159602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.074 [2024-10-07 13:36:24.160028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.074 [2024-10-07 13:36:24.160051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.074 [2024-10-07 13:36:24.160072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.074 [2024-10-07 13:36:24.160098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.074 [2024-10-07 13:36:24.169648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.074 [2024-10-07 13:36:24.169823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.074 [2024-10-07 13:36:24.169854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.074 [2024-10-07 13:36:24.169871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.074 [2024-10-07 13:36:24.170056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.074 [2024-10-07 13:36:24.170127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.074 [2024-10-07 13:36:24.170148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.074 [2024-10-07 13:36:24.170177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.074 [2024-10-07 13:36:24.170201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.074 [2024-10-07 13:36:24.183723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.074 [2024-10-07 13:36:24.184110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.074 [2024-10-07 13:36:24.184149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.074 [2024-10-07 13:36:24.184167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.074 [2024-10-07 13:36:24.184374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.074 [2024-10-07 13:36:24.184447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.074 [2024-10-07 13:36:24.184467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.074 [2024-10-07 13:36:24.184481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.074 [2024-10-07 13:36:24.184506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.074 [2024-10-07 13:36:24.199528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.074 [2024-10-07 13:36:24.200174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.074 [2024-10-07 13:36:24.200206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.074 [2024-10-07 13:36:24.200224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.074 [2024-10-07 13:36:24.200604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.074 [2024-10-07 13:36:24.200719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.074 [2024-10-07 13:36:24.200741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.074 [2024-10-07 13:36:24.200755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.074 [2024-10-07 13:36:24.200937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.074 [2024-10-07 13:36:24.215498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.074 [2024-10-07 13:36:24.215643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.074 [2024-10-07 13:36:24.215682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.074 [2024-10-07 13:36:24.215708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.074 [2024-10-07 13:36:24.216335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.074 [2024-10-07 13:36:24.216580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.074 [2024-10-07 13:36:24.216604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.074 [2024-10-07 13:36:24.216619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.074 [2024-10-07 13:36:24.216678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.074 [2024-10-07 13:36:24.229382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.074 [2024-10-07 13:36:24.229500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.074 [2024-10-07 13:36:24.229531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.075 [2024-10-07 13:36:24.229563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.075 [2024-10-07 13:36:24.229588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.075 [2024-10-07 13:36:24.229628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.075 [2024-10-07 13:36:24.229643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.075 [2024-10-07 13:36:24.229656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.075 [2024-10-07 13:36:24.229691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.075 [2024-10-07 13:36:24.242046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.075 [2024-10-07 13:36:24.242288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.075 [2024-10-07 13:36:24.242318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.075 [2024-10-07 13:36:24.242335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.075 [2024-10-07 13:36:24.242451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.075 [2024-10-07 13:36:24.242578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.075 [2024-10-07 13:36:24.242601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.075 [2024-10-07 13:36:24.242615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.075 [2024-10-07 13:36:24.242808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.075 [2024-10-07 13:36:24.252132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.075 [2024-10-07 13:36:24.252309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.075 [2024-10-07 13:36:24.252339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.075 [2024-10-07 13:36:24.252356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.075 [2024-10-07 13:36:24.252381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.075 [2024-10-07 13:36:24.252405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.075 [2024-10-07 13:36:24.252426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.075 [2024-10-07 13:36:24.252441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.075 [2024-10-07 13:36:24.252466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.075 [2024-10-07 13:36:24.262215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.075 [2024-10-07 13:36:24.262410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.075 [2024-10-07 13:36:24.262441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.075 [2024-10-07 13:36:24.262459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.075 [2024-10-07 13:36:24.262484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.075 [2024-10-07 13:36:24.262508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.075 [2024-10-07 13:36:24.262523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.075 [2024-10-07 13:36:24.262537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.075 [2024-10-07 13:36:24.262562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.075 [2024-10-07 13:36:24.277185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.075 [2024-10-07 13:36:24.277684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.075 [2024-10-07 13:36:24.277727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.075 [2024-10-07 13:36:24.277745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.075 [2024-10-07 13:36:24.278186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.075 [2024-10-07 13:36:24.278479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.075 [2024-10-07 13:36:24.278504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.075 [2024-10-07 13:36:24.278518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.075 [2024-10-07 13:36:24.278735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.075 [2024-10-07 13:36:24.288739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.075 [2024-10-07 13:36:24.288980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.075 [2024-10-07 13:36:24.289011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.075 [2024-10-07 13:36:24.289029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.075 [2024-10-07 13:36:24.289136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.075 [2024-10-07 13:36:24.289247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.075 [2024-10-07 13:36:24.289283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.075 [2024-10-07 13:36:24.289297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.075 [2024-10-07 13:36:24.289398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.075 [2024-10-07 13:36:24.298828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.075 [2024-10-07 13:36:24.299000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.075 [2024-10-07 13:36:24.299030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.075 [2024-10-07 13:36:24.299048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.075 [2024-10-07 13:36:24.299073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.075 [2024-10-07 13:36:24.299098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.075 [2024-10-07 13:36:24.299113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.075 [2024-10-07 13:36:24.299126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.075 [2024-10-07 13:36:24.299151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.075 [2024-10-07 13:36:24.308922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.075 [2024-10-07 13:36:24.309110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.075 [2024-10-07 13:36:24.309140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.075 [2024-10-07 13:36:24.309157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.075 [2024-10-07 13:36:24.309183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.075 [2024-10-07 13:36:24.309207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.075 [2024-10-07 13:36:24.309222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.075 [2024-10-07 13:36:24.309236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.075 [2024-10-07 13:36:24.309260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.075 [2024-10-07 13:36:24.323112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.075 [2024-10-07 13:36:24.323406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.075 [2024-10-07 13:36:24.323437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.075 [2024-10-07 13:36:24.323454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.075 [2024-10-07 13:36:24.323673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.075 [2024-10-07 13:36:24.323882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.075 [2024-10-07 13:36:24.323921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.075 [2024-10-07 13:36:24.323936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.075 [2024-10-07 13:36:24.323986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.075 [2024-10-07 13:36:24.336381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.075 [2024-10-07 13:36:24.336990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.075 [2024-10-07 13:36:24.337023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.075 [2024-10-07 13:36:24.337040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.075 [2024-10-07 13:36:24.337281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.075 [2024-10-07 13:36:24.337821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.075 [2024-10-07 13:36:24.337847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.075 [2024-10-07 13:36:24.337869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.075 [2024-10-07 13:36:24.338178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.075 [2024-10-07 13:36:24.346470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.075 [2024-10-07 13:36:24.346688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.075 [2024-10-07 13:36:24.346719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.075 [2024-10-07 13:36:24.346736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.075 [2024-10-07 13:36:24.347500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.075 [2024-10-07 13:36:24.347703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.075 [2024-10-07 13:36:24.347728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.075 [2024-10-07 13:36:24.347743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.075 [2024-10-07 13:36:24.347850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.075 [2024-10-07 13:36:24.356553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.076 [2024-10-07 13:36:24.356705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-10-07 13:36:24.356735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.076 [2024-10-07 13:36:24.356752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.076 [2024-10-07 13:36:24.356777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.076 [2024-10-07 13:36:24.356802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.076 [2024-10-07 13:36:24.356818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.076 [2024-10-07 13:36:24.356831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.076 [2024-10-07 13:36:24.356855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.076 [2024-10-07 13:36:24.366636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.076 [2024-10-07 13:36:24.366803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-10-07 13:36:24.366834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.076 [2024-10-07 13:36:24.366852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.076 [2024-10-07 13:36:24.366877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.076 [2024-10-07 13:36:24.366901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.076 [2024-10-07 13:36:24.366917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.076 [2024-10-07 13:36:24.366940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.076 [2024-10-07 13:36:24.367431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.076 [2024-10-07 13:36:24.381617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.076 [2024-10-07 13:36:24.382221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-10-07 13:36:24.382259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.076 [2024-10-07 13:36:24.382277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.076 [2024-10-07 13:36:24.382341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.076 [2024-10-07 13:36:24.382369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.076 [2024-10-07 13:36:24.382385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.076 [2024-10-07 13:36:24.382399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.076 [2024-10-07 13:36:24.382423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.076 [2024-10-07 13:36:24.392598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.076 [2024-10-07 13:36:24.392881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-10-07 13:36:24.392913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.076 [2024-10-07 13:36:24.392931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.076 [2024-10-07 13:36:24.393041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.076 [2024-10-07 13:36:24.393152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.076 [2024-10-07 13:36:24.393189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.076 [2024-10-07 13:36:24.393203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.076 [2024-10-07 13:36:24.396852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.076 [2024-10-07 13:36:24.402708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.076 [2024-10-07 13:36:24.402861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-10-07 13:36:24.402891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.076 [2024-10-07 13:36:24.402908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.076 [2024-10-07 13:36:24.402933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.076 [2024-10-07 13:36:24.402957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.076 [2024-10-07 13:36:24.402973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.076 [2024-10-07 13:36:24.402986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.076 [2024-10-07 13:36:24.403019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.076 [2024-10-07 13:36:24.412799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.076 [2024-10-07 13:36:24.413114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-10-07 13:36:24.413151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.076 [2024-10-07 13:36:24.413171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.076 [2024-10-07 13:36:24.413223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.076 [2024-10-07 13:36:24.413251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.076 [2024-10-07 13:36:24.413267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.076 [2024-10-07 13:36:24.413281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.076 [2024-10-07 13:36:24.413464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.076 [2024-10-07 13:36:24.426889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.076 [2024-10-07 13:36:24.427020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-10-07 13:36:24.427049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.076 [2024-10-07 13:36:24.427066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.076 [2024-10-07 13:36:24.427092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.076 [2024-10-07 13:36:24.427116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.076 [2024-10-07 13:36:24.427132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.076 [2024-10-07 13:36:24.427145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.076 [2024-10-07 13:36:24.427170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.076 [2024-10-07 13:36:24.443746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.076 [2024-10-07 13:36:24.443997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-10-07 13:36:24.444030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.076 [2024-10-07 13:36:24.444048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.076 [2024-10-07 13:36:24.444074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.076 [2024-10-07 13:36:24.444099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.076 [2024-10-07 13:36:24.444114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.076 [2024-10-07 13:36:24.444127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.076 [2024-10-07 13:36:24.444152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.076 [2024-10-07 13:36:24.458282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.076 [2024-10-07 13:36:24.458427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.076 [2024-10-07 13:36:24.458457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.076 [2024-10-07 13:36:24.458475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.076 [2024-10-07 13:36:24.458501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.076 [2024-10-07 13:36:24.458531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.076 [2024-10-07 13:36:24.458548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.076 [2024-10-07 13:36:24.458562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.076 [2024-10-07 13:36:24.458586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.076 [2024-10-07 13:36:24.474000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.076 [2024-10-07 13:36:24.474120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.076 [2024-10-07 13:36:24.474149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.076 [2024-10-07 13:36:24.474166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.076 [2024-10-07 13:36:24.474192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.076 [2024-10-07 13:36:24.474217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.076 [2024-10-07 13:36:24.474233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.077 [2024-10-07 13:36:24.474246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.077 [2024-10-07 13:36:24.474271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.077 [2024-10-07 13:36:24.489592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.077 [2024-10-07 13:36:24.489776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.077 [2024-10-07 13:36:24.489807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.077 [2024-10-07 13:36:24.489824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.077 [2024-10-07 13:36:24.489850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.077 [2024-10-07 13:36:24.489875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.077 [2024-10-07 13:36:24.489891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.077 [2024-10-07 13:36:24.489904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.077 [2024-10-07 13:36:24.489929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.077 [2024-10-07 13:36:24.502912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.077 [2024-10-07 13:36:24.503141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.077 [2024-10-07 13:36:24.503172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.077 [2024-10-07 13:36:24.503190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.077 [2024-10-07 13:36:24.503447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.077 [2024-10-07 13:36:24.503605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.077 [2024-10-07 13:36:24.503630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.077 [2024-10-07 13:36:24.503645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.077 [2024-10-07 13:36:24.505879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.077 [2024-10-07 13:36:24.513283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.077 [2024-10-07 13:36:24.513500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.077 [2024-10-07 13:36:24.513530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.077 [2024-10-07 13:36:24.513548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.077 [2024-10-07 13:36:24.513876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.077 [2024-10-07 13:36:24.514250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.077 [2024-10-07 13:36:24.514274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.077 [2024-10-07 13:36:24.514288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.077 [2024-10-07 13:36:24.514349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.077 [2024-10-07 13:36:24.523370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.077 [2024-10-07 13:36:24.523559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.077 [2024-10-07 13:36:24.523590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.077 [2024-10-07 13:36:24.523606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.077 [2024-10-07 13:36:24.523631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.077 [2024-10-07 13:36:24.523657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.077 [2024-10-07 13:36:24.523682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.077 [2024-10-07 13:36:24.523696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.077 [2024-10-07 13:36:24.523880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.077 [2024-10-07 13:36:24.533639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.077 [2024-10-07 13:36:24.533883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.077 [2024-10-07 13:36:24.533913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.077 [2024-10-07 13:36:24.533930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.077 [2024-10-07 13:36:24.537045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.077 [2024-10-07 13:36:24.537828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.077 [2024-10-07 13:36:24.537853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.077 [2024-10-07 13:36:24.537868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.077 [2024-10-07 13:36:24.538301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.077 [2024-10-07 13:36:24.543750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.077 [2024-10-07 13:36:24.543875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.077 [2024-10-07 13:36:24.543903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.077 [2024-10-07 13:36:24.543926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.077 [2024-10-07 13:36:24.543953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.077 [2024-10-07 13:36:24.543976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.077 [2024-10-07 13:36:24.543991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.077 [2024-10-07 13:36:24.544006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.077 [2024-10-07 13:36:24.544030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.077 [2024-10-07 13:36:24.553836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.077 [2024-10-07 13:36:24.553984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.077 [2024-10-07 13:36:24.554014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.077 [2024-10-07 13:36:24.554031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.077 [2024-10-07 13:36:24.554057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.077 [2024-10-07 13:36:24.554082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.077 [2024-10-07 13:36:24.554098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.077 [2024-10-07 13:36:24.554111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.077 [2024-10-07 13:36:24.554136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.077 [2024-10-07 13:36:24.567856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.077 [2024-10-07 13:36:24.568011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.077 [2024-10-07 13:36:24.568040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.077 [2024-10-07 13:36:24.568057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.077 [2024-10-07 13:36:24.568082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.077 [2024-10-07 13:36:24.568106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.077 [2024-10-07 13:36:24.568121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.077 [2024-10-07 13:36:24.568135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.077 [2024-10-07 13:36:24.568160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.077 [2024-10-07 13:36:24.579754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.077 [2024-10-07 13:36:24.580011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.077 [2024-10-07 13:36:24.580044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.077 [2024-10-07 13:36:24.580062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.077 [2024-10-07 13:36:24.580171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.077 [2024-10-07 13:36:24.580309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.077 [2024-10-07 13:36:24.580334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.077 [2024-10-07 13:36:24.580348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.077 [2024-10-07 13:36:24.580387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.077 [2024-10-07 13:36:24.592958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.077 [2024-10-07 13:36:24.593499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.077 [2024-10-07 13:36:24.593530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.077 [2024-10-07 13:36:24.593548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.077 [2024-10-07 13:36:24.593776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.077 [2024-10-07 13:36:24.593986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.077 [2024-10-07 13:36:24.594009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.077 [2024-10-07 13:36:24.594023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.077 [2024-10-07 13:36:24.594105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.077 8390.33 IOPS, 32.77 MiB/s [2024-10-07T11:36:37.789Z]
00:25:56.077 [2024-10-07 13:36:24.610227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.077 [2024-10-07 13:36:24.610597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.077 [2024-10-07 13:36:24.610630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.077 [2024-10-07 13:36:24.610647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.077 [2024-10-07 13:36:24.610705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.078 [2024-10-07 13:36:24.610734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.078 [2024-10-07 13:36:24.610751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.078 [2024-10-07 13:36:24.610764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.078 [2024-10-07 13:36:24.610789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.078 [2024-10-07 13:36:24.620564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.078 [2024-10-07 13:36:24.620724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.078 [2024-10-07 13:36:24.620754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.078 [2024-10-07 13:36:24.620772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.078 [2024-10-07 13:36:24.621743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.078 [2024-10-07 13:36:24.623688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.078 [2024-10-07 13:36:24.623714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.078 [2024-10-07 13:36:24.623728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.078 [2024-10-07 13:36:24.624264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.078 [2024-10-07 13:36:24.632553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.078 [2024-10-07 13:36:24.632839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.078 [2024-10-07 13:36:24.632872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.078 [2024-10-07 13:36:24.632890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.078 [2024-10-07 13:36:24.633010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.078 [2024-10-07 13:36:24.633136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.078 [2024-10-07 13:36:24.633157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.078 [2024-10-07 13:36:24.633171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.078 [2024-10-07 13:36:24.635426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.078 [2024-10-07 13:36:24.642640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.078 [2024-10-07 13:36:24.642791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.078 [2024-10-07 13:36:24.642821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.078 [2024-10-07 13:36:24.642838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.078 [2024-10-07 13:36:24.642863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.078 [2024-10-07 13:36:24.642887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.078 [2024-10-07 13:36:24.642903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.078 [2024-10-07 13:36:24.642916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.078 [2024-10-07 13:36:24.642941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.078 [2024-10-07 13:36:24.653458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.078 [2024-10-07 13:36:24.653663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.078 [2024-10-07 13:36:24.653701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.078 [2024-10-07 13:36:24.653718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.078 [2024-10-07 13:36:24.653903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.078 [2024-10-07 13:36:24.653960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.078 [2024-10-07 13:36:24.653980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.078 [2024-10-07 13:36:24.654010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.078 [2024-10-07 13:36:24.654035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.078 [2024-10-07 13:36:24.665228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.078 [2024-10-07 13:36:24.665510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.078 [2024-10-07 13:36:24.665542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.078 [2024-10-07 13:36:24.665566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.078 [2024-10-07 13:36:24.667064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.078 [2024-10-07 13:36:24.667734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.078 [2024-10-07 13:36:24.667759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.078 [2024-10-07 13:36:24.667773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.078 [2024-10-07 13:36:24.668029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.078 [2024-10-07 13:36:24.675315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.078 [2024-10-07 13:36:24.675465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.078 [2024-10-07 13:36:24.675494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.078 [2024-10-07 13:36:24.675511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.078 [2024-10-07 13:36:24.675536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.078 [2024-10-07 13:36:24.675560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.078 [2024-10-07 13:36:24.675575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.078 [2024-10-07 13:36:24.675589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.078 [2024-10-07 13:36:24.675613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.078 [2024-10-07 13:36:24.685542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.078 [2024-10-07 13:36:24.685730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.078 [2024-10-07 13:36:24.685760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.078 [2024-10-07 13:36:24.685778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.078 [2024-10-07 13:36:24.685803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.078 [2024-10-07 13:36:24.685827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.078 [2024-10-07 13:36:24.685844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.078 [2024-10-07 13:36:24.685857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.078 [2024-10-07 13:36:24.686042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.078 [2024-10-07 13:36:24.699646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.078 [2024-10-07 13:36:24.699861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.078 [2024-10-07 13:36:24.699892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.078 [2024-10-07 13:36:24.699909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.078 [2024-10-07 13:36:24.700108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.078 [2024-10-07 13:36:24.700193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.078 [2024-10-07 13:36:24.700235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.078 [2024-10-07 13:36:24.700249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.078 [2024-10-07 13:36:24.700291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.078 [2024-10-07 13:36:24.714179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.078 [2024-10-07 13:36:24.714415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.078 [2024-10-07 13:36:24.714447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.078 [2024-10-07 13:36:24.714464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.078 [2024-10-07 13:36:24.714490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.078 [2024-10-07 13:36:24.714514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.078 [2024-10-07 13:36:24.714529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.078 [2024-10-07 13:36:24.714542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.078 [2024-10-07 13:36:24.714566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.078 [2024-10-07 13:36:24.724265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.078 [2024-10-07 13:36:24.724482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.078 [2024-10-07 13:36:24.724512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.078 [2024-10-07 13:36:24.724529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.078 [2024-10-07 13:36:24.724555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.078 [2024-10-07 13:36:24.724579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.078 [2024-10-07 13:36:24.724593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.078 [2024-10-07 13:36:24.724606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.078 [2024-10-07 13:36:24.724630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.078 [2024-10-07 13:36:24.734347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.078 [2024-10-07 13:36:24.734546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.078 [2024-10-07 13:36:24.734576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.078 [2024-10-07 13:36:24.734595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.079 [2024-10-07 13:36:24.734622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.079 [2024-10-07 13:36:24.734647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.079 [2024-10-07 13:36:24.734663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.079 [2024-10-07 13:36:24.734686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.079 [2024-10-07 13:36:24.734711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.079 [2024-10-07 13:36:24.746585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.079 [2024-10-07 13:36:24.746778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.079 [2024-10-07 13:36:24.746809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.079 [2024-10-07 13:36:24.746826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.079 [2024-10-07 13:36:24.746853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.079 [2024-10-07 13:36:24.746881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.079 [2024-10-07 13:36:24.746897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.079 [2024-10-07 13:36:24.746910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.079 [2024-10-07 13:36:24.747388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.079 [2024-10-07 13:36:24.758541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.079 [2024-10-07 13:36:24.758787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.079 [2024-10-07 13:36:24.758820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.079 [2024-10-07 13:36:24.758838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.079 [2024-10-07 13:36:24.758948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.079 [2024-10-07 13:36:24.759059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.079 [2024-10-07 13:36:24.759081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.079 [2024-10-07 13:36:24.759095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.079 [2024-10-07 13:36:24.762082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.079 [2024-10-07 13:36:24.768627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.079 [2024-10-07 13:36:24.768807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.079 [2024-10-07 13:36:24.768837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.079 [2024-10-07 13:36:24.768854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.079 [2024-10-07 13:36:24.768880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.079 [2024-10-07 13:36:24.768905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.079 [2024-10-07 13:36:24.768921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.079 [2024-10-07 13:36:24.768934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.079 [2024-10-07 13:36:24.768958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.079 [2024-10-07 13:36:24.778716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.079 [2024-10-07 13:36:24.779000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.079 [2024-10-07 13:36:24.779032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.079 [2024-10-07 13:36:24.779051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.079 [2024-10-07 13:36:24.779109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.079 [2024-10-07 13:36:24.779138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.079 [2024-10-07 13:36:24.779154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.079 [2024-10-07 13:36:24.779167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.079 [2024-10-07 13:36:24.779350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.079 [2024-10-07 13:36:24.793152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.079 [2024-10-07 13:36:24.794112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.079 [2024-10-07 13:36:24.794145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.079 [2024-10-07 13:36:24.794163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.079 [2024-10-07 13:36:24.794555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.079 [2024-10-07 13:36:24.794804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.079 [2024-10-07 13:36:24.794831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.079 [2024-10-07 13:36:24.794845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.079 [2024-10-07 13:36:24.794897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.079 [2024-10-07 13:36:24.803548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.079 [2024-10-07 13:36:24.803703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.079 [2024-10-07 13:36:24.803733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.079 [2024-10-07 13:36:24.803750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.079 [2024-10-07 13:36:24.803776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.079 [2024-10-07 13:36:24.803800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.079 [2024-10-07 13:36:24.803815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.079 [2024-10-07 13:36:24.803829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.079 [2024-10-07 13:36:24.803853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.079 [2024-10-07 13:36:24.813634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.079 [2024-10-07 13:36:24.813823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.079 [2024-10-07 13:36:24.813851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.079 [2024-10-07 13:36:24.813868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.079 [2024-10-07 13:36:24.813894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.079 [2024-10-07 13:36:24.813918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.079 [2024-10-07 13:36:24.813934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.079 [2024-10-07 13:36:24.813955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.079 [2024-10-07 13:36:24.813981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.079 [2024-10-07 13:36:24.826167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.079 [2024-10-07 13:36:24.826415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.079 [2024-10-07 13:36:24.826457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.079 [2024-10-07 13:36:24.826475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.079 [2024-10-07 13:36:24.826682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.079 [2024-10-07 13:36:24.826755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.079 [2024-10-07 13:36:24.826776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.079 [2024-10-07 13:36:24.826790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.079 [2024-10-07 13:36:24.826815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.079 [2024-10-07 13:36:24.842491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.079 [2024-10-07 13:36:24.843096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.079 [2024-10-07 13:36:24.843129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.079 [2024-10-07 13:36:24.843147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.079 [2024-10-07 13:36:24.843393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.079 [2024-10-07 13:36:24.843451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.079 [2024-10-07 13:36:24.843487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.079 [2024-10-07 13:36:24.843502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.079 [2024-10-07 13:36:24.843528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.079 [2024-10-07 13:36:24.852984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.079 [2024-10-07 13:36:24.854870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.079 [2024-10-07 13:36:24.854903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.079 [2024-10-07 13:36:24.854921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.079 [2024-10-07 13:36:24.857124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.079 [2024-10-07 13:36:24.857851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.079 [2024-10-07 13:36:24.857876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.079 [2024-10-07 13:36:24.857890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.079 [2024-10-07 13:36:24.858330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.079 [2024-10-07 13:36:24.863233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.079 [2024-10-07 13:36:24.863446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.079 [2024-10-07 13:36:24.863479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.080 [2024-10-07 13:36:24.863498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.080 [2024-10-07 13:36:24.863524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.080 [2024-10-07 13:36:24.863547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.080 [2024-10-07 13:36:24.863563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.080 [2024-10-07 13:36:24.863577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.080 [2024-10-07 13:36:24.863601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.080 [2024-10-07 13:36:24.873333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.080 [2024-10-07 13:36:24.873467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.080 [2024-10-07 13:36:24.873495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.080 [2024-10-07 13:36:24.873513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.080 [2024-10-07 13:36:24.873537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.080 [2024-10-07 13:36:24.873577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.080 [2024-10-07 13:36:24.873593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.080 [2024-10-07 13:36:24.873606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.080 [2024-10-07 13:36:24.873801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.080 [2024-10-07 13:36:24.887200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.080 [2024-10-07 13:36:24.887734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.080 [2024-10-07 13:36:24.887766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.080 [2024-10-07 13:36:24.887783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.080 [2024-10-07 13:36:24.888000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.080 [2024-10-07 13:36:24.888208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.080 [2024-10-07 13:36:24.888233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.080 [2024-10-07 13:36:24.888248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.080 [2024-10-07 13:36:24.888313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.080 [2024-10-07 13:36:24.902521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.080 [2024-10-07 13:36:24.903054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.080 [2024-10-07 13:36:24.903086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.080 [2024-10-07 13:36:24.903104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.080 [2024-10-07 13:36:24.903491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.080 [2024-10-07 13:36:24.903608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.080 [2024-10-07 13:36:24.903631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.080 [2024-10-07 13:36:24.903646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.080 [2024-10-07 13:36:24.903840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.080 [2024-10-07 13:36:24.917917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.080 [2024-10-07 13:36:24.918064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.080 [2024-10-07 13:36:24.918093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.080 [2024-10-07 13:36:24.918111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.080 [2024-10-07 13:36:24.918137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.080 [2024-10-07 13:36:24.918160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.080 [2024-10-07 13:36:24.918176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.080 [2024-10-07 13:36:24.918190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.080 [2024-10-07 13:36:24.918214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.080 [2024-10-07 13:36:24.929814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.080 [2024-10-07 13:36:24.930049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.080 [2024-10-07 13:36:24.930081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.080 [2024-10-07 13:36:24.930098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.080 [2024-10-07 13:36:24.930207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.080 [2024-10-07 13:36:24.930318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.080 [2024-10-07 13:36:24.930339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.080 [2024-10-07 13:36:24.930353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.080 [2024-10-07 13:36:24.930486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.080 [2024-10-07 13:36:24.940438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.080 [2024-10-07 13:36:24.940833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.080 [2024-10-07 13:36:24.940866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.080 [2024-10-07 13:36:24.940884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.080 [2024-10-07 13:36:24.940930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.080 [2024-10-07 13:36:24.940973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.080 [2024-10-07 13:36:24.940988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.080 [2024-10-07 13:36:24.941001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.080 [2024-10-07 13:36:24.941031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.080 [2024-10-07 13:36:24.950525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.080 [2024-10-07 13:36:24.950878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.080 [2024-10-07 13:36:24.950911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.080 [2024-10-07 13:36:24.950929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.080 [2024-10-07 13:36:24.950980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.080 [2024-10-07 13:36:24.951166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.080 [2024-10-07 13:36:24.951189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.080 [2024-10-07 13:36:24.951203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.080 [2024-10-07 13:36:24.951255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.080 [2024-10-07 13:36:24.963770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.080 [2024-10-07 13:36:24.963938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.080 [2024-10-07 13:36:24.963967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.080 [2024-10-07 13:36:24.963984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.080 [2024-10-07 13:36:24.964010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.080 [2024-10-07 13:36:24.964034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.080 [2024-10-07 13:36:24.964051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.080 [2024-10-07 13:36:24.964064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.080 [2024-10-07 13:36:24.964088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.080 [2024-10-07 13:36:24.976776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.080 [2024-10-07 13:36:24.977013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.080 [2024-10-07 13:36:24.977044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.080 [2024-10-07 13:36:24.977062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.080 [2024-10-07 13:36:24.977169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.080 [2024-10-07 13:36:24.977282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.080 [2024-10-07 13:36:24.977304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.080 [2024-10-07 13:36:24.977318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.080 [2024-10-07 13:36:24.977440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.081 [2024-10-07 13:36:24.987418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.081 [2024-10-07 13:36:24.987844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-10-07 13:36:24.987876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.081 [2024-10-07 13:36:24.987905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.081 [2024-10-07 13:36:24.987952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.081 [2024-10-07 13:36:24.987981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.081 [2024-10-07 13:36:24.987996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.081 [2024-10-07 13:36:24.988010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.081 [2024-10-07 13:36:24.988035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.081 [2024-10-07 13:36:24.997563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.081 [2024-10-07 13:36:24.997745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-10-07 13:36:24.997777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.081 [2024-10-07 13:36:24.997795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.081 [2024-10-07 13:36:24.998107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.081 [2024-10-07 13:36:24.998186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.081 [2024-10-07 13:36:24.998206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.081 [2024-10-07 13:36:24.998219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.081 [2024-10-07 13:36:24.998262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.081 [2024-10-07 13:36:25.012130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.081 [2024-10-07 13:36:25.012487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-10-07 13:36:25.012519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.081 [2024-10-07 13:36:25.012537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.081 [2024-10-07 13:36:25.012757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.081 [2024-10-07 13:36:25.012816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.081 [2024-10-07 13:36:25.012838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.081 [2024-10-07 13:36:25.012852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.081 [2024-10-07 13:36:25.012878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.081 [2024-10-07 13:36:25.026378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.081 [2024-10-07 13:36:25.026532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-10-07 13:36:25.026561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.081 [2024-10-07 13:36:25.026579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.081 [2024-10-07 13:36:25.026604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.081 [2024-10-07 13:36:25.026630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.081 [2024-10-07 13:36:25.026651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.081 [2024-10-07 13:36:25.026673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.081 [2024-10-07 13:36:25.026713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.081 [2024-10-07 13:36:25.036465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.081 [2024-10-07 13:36:25.036591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-10-07 13:36:25.036620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.081 [2024-10-07 13:36:25.036637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.081 [2024-10-07 13:36:25.036662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.081 [2024-10-07 13:36:25.036726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.081 [2024-10-07 13:36:25.036741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.081 [2024-10-07 13:36:25.036755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.081 [2024-10-07 13:36:25.039406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.081 [2024-10-07 13:36:25.046553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.081 [2024-10-07 13:36:25.046799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-10-07 13:36:25.046830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.081 [2024-10-07 13:36:25.046847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.081 [2024-10-07 13:36:25.047028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.081 [2024-10-07 13:36:25.047084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.081 [2024-10-07 13:36:25.047104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.081 [2024-10-07 13:36:25.047118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.081 [2024-10-07 13:36:25.047143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.081 [2024-10-07 13:36:25.058934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.081 [2024-10-07 13:36:25.059085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.081 [2024-10-07 13:36:25.059116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.081 [2024-10-07 13:36:25.059134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.081 [2024-10-07 13:36:25.059160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.081 [2024-10-07 13:36:25.059185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.081 [2024-10-07 13:36:25.059201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.081 [2024-10-07 13:36:25.059215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.081 [2024-10-07 13:36:25.059240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.081 [2024-10-07 13:36:25.073864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.081 [2024-10-07 13:36:25.073992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.081 [2024-10-07 13:36:25.074023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.081 [2024-10-07 13:36:25.074041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.081 [2024-10-07 13:36:25.074067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.081 [2024-10-07 13:36:25.074091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.081 [2024-10-07 13:36:25.074106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.081 [2024-10-07 13:36:25.074121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.081 [2024-10-07 13:36:25.074146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.081 [2024-10-07 13:36:25.087491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.081 [2024-10-07 13:36:25.089646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.081 [2024-10-07 13:36:25.089686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.081 [2024-10-07 13:36:25.089706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.081 [2024-10-07 13:36:25.090392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.081 [2024-10-07 13:36:25.090673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.081 [2024-10-07 13:36:25.090698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.081 [2024-10-07 13:36:25.090713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.081 [2024-10-07 13:36:25.090917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.081 [2024-10-07 13:36:25.097754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.081 [2024-10-07 13:36:25.097907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.081 [2024-10-07 13:36:25.097937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.081 [2024-10-07 13:36:25.097954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.081 [2024-10-07 13:36:25.098367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.081 [2024-10-07 13:36:25.098401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.081 [2024-10-07 13:36:25.098417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.081 [2024-10-07 13:36:25.098431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.081 [2024-10-07 13:36:25.098456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.081 [2024-10-07 13:36:25.107851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.081 [2024-10-07 13:36:25.108021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.081 [2024-10-07 13:36:25.108051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.081 [2024-10-07 13:36:25.108074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.081 [2024-10-07 13:36:25.108260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.081 [2024-10-07 13:36:25.108318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.081 [2024-10-07 13:36:25.108338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.081 [2024-10-07 13:36:25.108352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.082 [2024-10-07 13:36:25.108377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.082 [2024-10-07 13:36:25.121643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.082 [2024-10-07 13:36:25.121986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.082 [2024-10-07 13:36:25.122019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.082 [2024-10-07 13:36:25.122037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.082 [2024-10-07 13:36:25.122258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.082 [2024-10-07 13:36:25.122316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.082 [2024-10-07 13:36:25.122336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.082 [2024-10-07 13:36:25.122350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.082 [2024-10-07 13:36:25.122375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.082 [2024-10-07 13:36:25.136552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.082 [2024-10-07 13:36:25.136688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.082 [2024-10-07 13:36:25.136720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.082 [2024-10-07 13:36:25.136737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.082 [2024-10-07 13:36:25.136762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.082 [2024-10-07 13:36:25.136786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.082 [2024-10-07 13:36:25.136802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.082 [2024-10-07 13:36:25.136815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.082 [2024-10-07 13:36:25.136840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.082 [2024-10-07 13:36:25.147421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.082 [2024-10-07 13:36:25.147671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.082 [2024-10-07 13:36:25.147704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.082 [2024-10-07 13:36:25.147722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.082 [2024-10-07 13:36:25.147848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.082 [2024-10-07 13:36:25.147965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.082 [2024-10-07 13:36:25.147986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.082 [2024-10-07 13:36:25.148006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.082 [2024-10-07 13:36:25.148127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.082 [2024-10-07 13:36:25.157510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.082 [2024-10-07 13:36:25.157645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.082 [2024-10-07 13:36:25.157697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.082 [2024-10-07 13:36:25.157721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.082 [2024-10-07 13:36:25.157746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.082 [2024-10-07 13:36:25.157784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.082 [2024-10-07 13:36:25.157803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.082 [2024-10-07 13:36:25.157817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.082 [2024-10-07 13:36:25.157841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.082 [2024-10-07 13:36:25.169229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.082 [2024-10-07 13:36:25.169438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.082 [2024-10-07 13:36:25.169469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.082 [2024-10-07 13:36:25.169487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.082 [2024-10-07 13:36:25.169694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.082 [2024-10-07 13:36:25.169773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.082 [2024-10-07 13:36:25.169795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.082 [2024-10-07 13:36:25.169809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.082 [2024-10-07 13:36:25.169835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.082 [2024-10-07 13:36:25.185129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.082 [2024-10-07 13:36:25.185495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.082 [2024-10-07 13:36:25.185528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.082 [2024-10-07 13:36:25.185545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.082 [2024-10-07 13:36:25.185792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.082 [2024-10-07 13:36:25.185852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.082 [2024-10-07 13:36:25.185874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.082 [2024-10-07 13:36:25.185888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.082 [2024-10-07 13:36:25.186071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.082 [2024-10-07 13:36:25.200241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.082 [2024-10-07 13:36:25.200398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.082 [2024-10-07 13:36:25.200429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.082 [2024-10-07 13:36:25.200447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.082 [2024-10-07 13:36:25.200473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.082 [2024-10-07 13:36:25.200498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.082 [2024-10-07 13:36:25.200513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.082 [2024-10-07 13:36:25.200527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.082 [2024-10-07 13:36:25.200551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.082 [2024-10-07 13:36:25.210851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.082 [2024-10-07 13:36:25.213731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.082 [2024-10-07 13:36:25.213765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.082 [2024-10-07 13:36:25.213782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.082 [2024-10-07 13:36:25.215354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.082 [2024-10-07 13:36:25.215423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.082 [2024-10-07 13:36:25.215443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.082 [2024-10-07 13:36:25.215457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.082 [2024-10-07 13:36:25.215483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.082 [2024-10-07 13:36:25.222289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.082 [2024-10-07 13:36:25.222442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.082 [2024-10-07 13:36:25.222472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.082 [2024-10-07 13:36:25.222490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.082 [2024-10-07 13:36:25.222515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.082 [2024-10-07 13:36:25.222540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.082 [2024-10-07 13:36:25.222555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.082 [2024-10-07 13:36:25.222569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.082 [2024-10-07 13:36:25.222594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.082 [2024-10-07 13:36:25.232493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.082 [2024-10-07 13:36:25.232703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.082 [2024-10-07 13:36:25.232734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.082 [2024-10-07 13:36:25.232752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.082 [2024-10-07 13:36:25.232942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.082 [2024-10-07 13:36:25.233015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.082 [2024-10-07 13:36:25.233037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.082 [2024-10-07 13:36:25.233051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.082 [2024-10-07 13:36:25.233075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.082 [2024-10-07 13:36:25.245549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.082 [2024-10-07 13:36:25.245873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.082 [2024-10-07 13:36:25.245906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.082 [2024-10-07 13:36:25.245924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.082 [2024-10-07 13:36:25.245975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.082 [2024-10-07 13:36:25.246004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.082 [2024-10-07 13:36:25.246019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.083 [2024-10-07 13:36:25.246032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.083 [2024-10-07 13:36:25.246057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.083 [2024-10-07 13:36:25.257097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.083 [2024-10-07 13:36:25.257396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.083 [2024-10-07 13:36:25.257428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.083 [2024-10-07 13:36:25.257446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.083 [2024-10-07 13:36:25.257556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.083 [2024-10-07 13:36:25.257676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.083 [2024-10-07 13:36:25.257708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.083 [2024-10-07 13:36:25.257726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.083 [2024-10-07 13:36:25.257842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.083 [2024-10-07 13:36:25.267185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.083 [2024-10-07 13:36:25.267456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.083 [2024-10-07 13:36:25.267487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.083 [2024-10-07 13:36:25.267505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.083 [2024-10-07 13:36:25.267531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.083 [2024-10-07 13:36:25.267556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.083 [2024-10-07 13:36:25.267572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.083 [2024-10-07 13:36:25.267592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.083 [2024-10-07 13:36:25.267618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.083 [2024-10-07 13:36:25.277288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.083 [2024-10-07 13:36:25.277472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.083 [2024-10-07 13:36:25.277502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.083 [2024-10-07 13:36:25.277520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.083 [2024-10-07 13:36:25.277716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.083 [2024-10-07 13:36:25.277805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.083 [2024-10-07 13:36:25.277827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.083 [2024-10-07 13:36:25.277841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.083 [2024-10-07 13:36:25.277867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.083 [2024-10-07 13:36:25.289824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.083 [2024-10-07 13:36:25.290283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.083 [2024-10-07 13:36:25.290315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.083 [2024-10-07 13:36:25.290332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.083 [2024-10-07 13:36:25.290537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.083 [2024-10-07 13:36:25.290608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.083 [2024-10-07 13:36:25.290629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.083 [2024-10-07 13:36:25.290642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.083 [2024-10-07 13:36:25.290692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.083 [2024-10-07 13:36:25.304310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.083 [2024-10-07 13:36:25.304771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.083 [2024-10-07 13:36:25.304804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.083 [2024-10-07 13:36:25.304822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.083 [2024-10-07 13:36:25.304885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.083 [2024-10-07 13:36:25.304914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.083 [2024-10-07 13:36:25.304930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.083 [2024-10-07 13:36:25.304958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.083 [2024-10-07 13:36:25.304986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.083 [2024-10-07 13:36:25.314414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.083 [2024-10-07 13:36:25.317080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.083 [2024-10-07 13:36:25.317118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.083 [2024-10-07 13:36:25.317137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.083 [2024-10-07 13:36:25.318411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.083 [2024-10-07 13:36:25.318699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.083 [2024-10-07 13:36:25.318724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.083 [2024-10-07 13:36:25.318738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.083 [2024-10-07 13:36:25.318859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.083 [2024-10-07 13:36:25.324499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.083 [2024-10-07 13:36:25.324682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.083 [2024-10-07 13:36:25.324711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.083 [2024-10-07 13:36:25.324728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.083 [2024-10-07 13:36:25.324754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.083 [2024-10-07 13:36:25.324778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.083 [2024-10-07 13:36:25.324793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.083 [2024-10-07 13:36:25.324806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.083 [2024-10-07 13:36:25.324831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.083 [2024-10-07 13:36:25.335947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.083 [2024-10-07 13:36:25.336257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.083 [2024-10-07 13:36:25.336290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.083 [2024-10-07 13:36:25.336308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.083 [2024-10-07 13:36:25.336361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.083 [2024-10-07 13:36:25.336546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.083 [2024-10-07 13:36:25.336569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.083 [2024-10-07 13:36:25.336583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.083 [2024-10-07 13:36:25.336650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.083 [2024-10-07 13:36:25.351717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.083 [2024-10-07 13:36:25.351896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.083 [2024-10-07 13:36:25.351926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.083 [2024-10-07 13:36:25.351943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.083 [2024-10-07 13:36:25.351970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.083 [2024-10-07 13:36:25.352000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.083 [2024-10-07 13:36:25.352016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.083 [2024-10-07 13:36:25.352030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.083 [2024-10-07 13:36:25.352055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.083 [2024-10-07 13:36:25.364748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.083 [2024-10-07 13:36:25.365119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-10-07 13:36:25.365152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.083 [2024-10-07 13:36:25.365170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.083 [2024-10-07 13:36:25.365376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.083 [2024-10-07 13:36:25.365441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.083 [2024-10-07 13:36:25.365462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.083 [2024-10-07 13:36:25.365491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.083 [2024-10-07 13:36:25.365517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.083 [2024-10-07 13:36:25.375715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.083 [2024-10-07 13:36:25.375973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.083 [2024-10-07 13:36:25.376005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.083 [2024-10-07 13:36:25.376023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.083 [2024-10-07 13:36:25.376131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.083 [2024-10-07 13:36:25.376242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.084 [2024-10-07 13:36:25.376263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.084 [2024-10-07 13:36:25.376277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.084 [2024-10-07 13:36:25.380475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.084 [2024-10-07 13:36:25.385805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.084 [2024-10-07 13:36:25.385972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-10-07 13:36:25.386003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.084 [2024-10-07 13:36:25.386020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.084 [2024-10-07 13:36:25.386045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.084 [2024-10-07 13:36:25.386070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.084 [2024-10-07 13:36:25.386085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.084 [2024-10-07 13:36:25.386098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.084 [2024-10-07 13:36:25.386129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.084 [2024-10-07 13:36:25.396152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.084 [2024-10-07 13:36:25.396474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-10-07 13:36:25.396506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.084 [2024-10-07 13:36:25.396523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.084 [2024-10-07 13:36:25.396575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.084 [2024-10-07 13:36:25.396603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.084 [2024-10-07 13:36:25.396619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.084 [2024-10-07 13:36:25.396632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.084 [2024-10-07 13:36:25.396657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.084 [2024-10-07 13:36:25.408695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.084 [2024-10-07 13:36:25.409414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-10-07 13:36:25.409446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.084 [2024-10-07 13:36:25.409464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.084 [2024-10-07 13:36:25.409708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.084 [2024-10-07 13:36:25.410253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.084 [2024-10-07 13:36:25.410278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.084 [2024-10-07 13:36:25.410292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.084 [2024-10-07 13:36:25.410515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.084 [2024-10-07 13:36:25.419341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.084 [2024-10-07 13:36:25.419536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-10-07 13:36:25.419567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.084 [2024-10-07 13:36:25.419585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.084 [2024-10-07 13:36:25.422140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.084 [2024-10-07 13:36:25.423036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.084 [2024-10-07 13:36:25.423062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.084 [2024-10-07 13:36:25.423092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.084 [2024-10-07 13:36:25.423448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.084 [2024-10-07 13:36:25.429586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.084 [2024-10-07 13:36:25.429741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-10-07 13:36:25.429771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.084 [2024-10-07 13:36:25.429794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.084 [2024-10-07 13:36:25.429821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.084 [2024-10-07 13:36:25.429845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.084 [2024-10-07 13:36:25.429860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.084 [2024-10-07 13:36:25.429873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.084 [2024-10-07 13:36:25.429897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.084 [2024-10-07 13:36:25.439681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.084 [2024-10-07 13:36:25.439861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-10-07 13:36:25.439891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.084 [2024-10-07 13:36:25.439909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.084 [2024-10-07 13:36:25.439935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.084 [2024-10-07 13:36:25.439959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.084 [2024-10-07 13:36:25.439974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.084 [2024-10-07 13:36:25.439988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.084 [2024-10-07 13:36:25.440012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.084 [2024-10-07 13:36:25.452672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.084 [2024-10-07 13:36:25.452898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-10-07 13:36:25.452930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.084 [2024-10-07 13:36:25.452948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.084 [2024-10-07 13:36:25.453132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.084 [2024-10-07 13:36:25.453191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.084 [2024-10-07 13:36:25.453213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.084 [2024-10-07 13:36:25.453243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.084 [2024-10-07 13:36:25.453268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.084 [2024-10-07 13:36:25.463195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.084 [2024-10-07 13:36:25.463349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-10-07 13:36:25.463379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.084 [2024-10-07 13:36:25.463397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.084 [2024-10-07 13:36:25.465968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.084 [2024-10-07 13:36:25.466879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.084 [2024-10-07 13:36:25.466929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.084 [2024-10-07 13:36:25.466944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.084 [2024-10-07 13:36:25.467298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.084 [2024-10-07 13:36:25.473466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.084 [2024-10-07 13:36:25.473730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-10-07 13:36:25.473760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.084 [2024-10-07 13:36:25.473778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.084 [2024-10-07 13:36:25.473803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.084 [2024-10-07 13:36:25.473827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.084 [2024-10-07 13:36:25.473843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.084 [2024-10-07 13:36:25.473856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.084 [2024-10-07 13:36:25.473881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.084 [2024-10-07 13:36:25.483831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.084 [2024-10-07 13:36:25.484023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.084 [2024-10-07 13:36:25.484054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.085 [2024-10-07 13:36:25.484072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.085 [2024-10-07 13:36:25.484097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.085 [2024-10-07 13:36:25.484122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.085 [2024-10-07 13:36:25.484137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.085 [2024-10-07 13:36:25.484150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.085 [2024-10-07 13:36:25.484174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.085 [2024-10-07 13:36:25.494208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.085 [2024-10-07 13:36:25.494465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-10-07 13:36:25.494497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.085 [2024-10-07 13:36:25.494515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.085 [2024-10-07 13:36:25.494622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.085 [2024-10-07 13:36:25.495992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.085 [2024-10-07 13:36:25.496018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.085 [2024-10-07 13:36:25.496032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.085 [2024-10-07 13:36:25.497285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.085 [2024-10-07 13:36:25.504303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.085 [2024-10-07 13:36:25.504501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-10-07 13:36:25.504530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.085 [2024-10-07 13:36:25.504547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.085 [2024-10-07 13:36:25.504572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.085 [2024-10-07 13:36:25.504596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.085 [2024-10-07 13:36:25.504611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.085 [2024-10-07 13:36:25.504625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.085 [2024-10-07 13:36:25.504649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.085 [2024-10-07 13:36:25.514386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.085 [2024-10-07 13:36:25.514542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-10-07 13:36:25.514572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.085 [2024-10-07 13:36:25.514590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.085 [2024-10-07 13:36:25.514615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.085 [2024-10-07 13:36:25.514639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.085 [2024-10-07 13:36:25.514655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.085 [2024-10-07 13:36:25.514679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.085 [2024-10-07 13:36:25.514707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.085 [2024-10-07 13:36:25.527402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.085 [2024-10-07 13:36:25.529466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-10-07 13:36:25.529499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.085 [2024-10-07 13:36:25.529517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.085 [2024-10-07 13:36:25.529613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.085 [2024-10-07 13:36:25.529642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.085 [2024-10-07 13:36:25.529658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.085 [2024-10-07 13:36:25.529682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.085 [2024-10-07 13:36:25.529709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.085 [2024-10-07 13:36:25.537485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.085 [2024-10-07 13:36:25.537620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-10-07 13:36:25.537649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.085 [2024-10-07 13:36:25.537687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.085 [2024-10-07 13:36:25.537721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.085 [2024-10-07 13:36:25.537746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.085 [2024-10-07 13:36:25.537762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.085 [2024-10-07 13:36:25.537775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.085 [2024-10-07 13:36:25.537799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.085 [2024-10-07 13:36:25.547571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.085 [2024-10-07 13:36:25.547723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-10-07 13:36:25.547754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.085 [2024-10-07 13:36:25.547772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.085 [2024-10-07 13:36:25.547798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.085 [2024-10-07 13:36:25.547822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.085 [2024-10-07 13:36:25.547837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.085 [2024-10-07 13:36:25.547851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.085 [2024-10-07 13:36:25.547875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.085 [2024-10-07 13:36:25.560946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.085 [2024-10-07 13:36:25.561298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-10-07 13:36:25.561329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.085 [2024-10-07 13:36:25.561348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.085 [2024-10-07 13:36:25.561552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.085 [2024-10-07 13:36:25.561610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.085 [2024-10-07 13:36:25.561631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.085 [2024-10-07 13:36:25.561645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.085 [2024-10-07 13:36:25.561680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.085 [2024-10-07 13:36:25.576144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.085 [2024-10-07 13:36:25.576262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-10-07 13:36:25.576293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.085 [2024-10-07 13:36:25.576311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.085 [2024-10-07 13:36:25.576337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.085 [2024-10-07 13:36:25.576360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.085 [2024-10-07 13:36:25.576376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.085 [2024-10-07 13:36:25.576399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.085 [2024-10-07 13:36:25.576425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.085 [2024-10-07 13:36:25.592191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.085 [2024-10-07 13:36:25.593051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-10-07 13:36:25.593083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.085 [2024-10-07 13:36:25.593100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.085 [2024-10-07 13:36:25.593344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.085 [2024-10-07 13:36:25.593569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.085 [2024-10-07 13:36:25.593593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.085 [2024-10-07 13:36:25.593608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.085 [2024-10-07 13:36:25.593836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.085 [2024-10-07 13:36:25.602277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.085 [2024-10-07 13:36:25.602437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.085 [2024-10-07 13:36:25.602467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.085 [2024-10-07 13:36:25.602484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.085 [2024-10-07 13:36:25.605162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.085 [2024-10-07 13:36:25.607980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.085 [2024-10-07 13:36:25.608008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.085 [2024-10-07 13:36:25.608022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.085 [2024-10-07 13:36:25.611489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.085 8412.75 IOPS, 32.86 MiB/s [2024-10-07T11:36:37.797Z] [2024-10-07 13:36:25.612447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.086 [2024-10-07 13:36:25.612664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-10-07 13:36:25.612698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.086 [2024-10-07 13:36:25.612716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.086 [2024-10-07 13:36:25.612741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.086 [2024-10-07 13:36:25.612765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.086 [2024-10-07 13:36:25.612780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.086 [2024-10-07 13:36:25.612794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.086 [2024-10-07 13:36:25.612818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.086 [2024-10-07 13:36:25.625127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.086 [2024-10-07 13:36:25.625276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-10-07 13:36:25.625307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.086 [2024-10-07 13:36:25.625325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.086 [2024-10-07 13:36:25.625350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.086 [2024-10-07 13:36:25.625374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.086 [2024-10-07 13:36:25.625390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.086 [2024-10-07 13:36:25.625403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.086 [2024-10-07 13:36:25.625427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.086 [2024-10-07 13:36:25.636227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.086 [2024-10-07 13:36:25.636496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-10-07 13:36:25.636527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.086 [2024-10-07 13:36:25.636545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.086 [2024-10-07 13:36:25.636653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.086 [2024-10-07 13:36:25.636790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.086 [2024-10-07 13:36:25.636812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.086 [2024-10-07 13:36:25.636826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.086 [2024-10-07 13:36:25.636930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.086 [2024-10-07 13:36:25.646330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.086 [2024-10-07 13:36:25.646453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-10-07 13:36:25.646482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.086 [2024-10-07 13:36:25.646499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.086 [2024-10-07 13:36:25.646524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.086 [2024-10-07 13:36:25.646547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.086 [2024-10-07 13:36:25.646562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.086 [2024-10-07 13:36:25.646574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.086 [2024-10-07 13:36:25.646598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.086 [2024-10-07 13:36:25.656414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.086 [2024-10-07 13:36:25.656534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-10-07 13:36:25.656565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.086 [2024-10-07 13:36:25.656583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.086 [2024-10-07 13:36:25.656798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.086 [2024-10-07 13:36:25.656872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.086 [2024-10-07 13:36:25.656894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.086 [2024-10-07 13:36:25.656908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.086 [2024-10-07 13:36:25.656934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.086 [2024-10-07 13:36:25.670263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.086 [2024-10-07 13:36:25.670411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-10-07 13:36:25.670441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.086 [2024-10-07 13:36:25.670459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.086 [2024-10-07 13:36:25.670485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.086 [2024-10-07 13:36:25.670509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.086 [2024-10-07 13:36:25.670525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.086 [2024-10-07 13:36:25.670538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.086 [2024-10-07 13:36:25.670563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.086 [2024-10-07 13:36:25.686426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.086 [2024-10-07 13:36:25.686701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-10-07 13:36:25.686733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.086 [2024-10-07 13:36:25.686751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.086 [2024-10-07 13:36:25.686777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.086 [2024-10-07 13:36:25.686802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.086 [2024-10-07 13:36:25.686817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.086 [2024-10-07 13:36:25.686830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.086 [2024-10-07 13:36:25.687329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.086 [2024-10-07 13:36:25.702019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.086 [2024-10-07 13:36:25.702613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-10-07 13:36:25.702645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.086 [2024-10-07 13:36:25.702662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.086 [2024-10-07 13:36:25.702889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.086 [2024-10-07 13:36:25.702964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.086 [2024-10-07 13:36:25.702985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.086 [2024-10-07 13:36:25.703027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.086 [2024-10-07 13:36:25.703055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.086 [2024-10-07 13:36:25.712380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.086 [2024-10-07 13:36:25.714259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-10-07 13:36:25.714291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.086 [2024-10-07 13:36:25.714309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.086 [2024-10-07 13:36:25.716446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.086 [2024-10-07 13:36:25.717197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.086 [2024-10-07 13:36:25.717221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.086 [2024-10-07 13:36:25.717234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.086 [2024-10-07 13:36:25.717642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.086 [2024-10-07 13:36:25.722572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.086 [2024-10-07 13:36:25.722765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-10-07 13:36:25.722796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.086 [2024-10-07 13:36:25.722813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.086 [2024-10-07 13:36:25.722839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.086 [2024-10-07 13:36:25.722864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.086 [2024-10-07 13:36:25.722879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.086 [2024-10-07 13:36:25.722892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.086 [2024-10-07 13:36:25.722917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.086 [2024-10-07 13:36:25.732657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.086 [2024-10-07 13:36:25.732843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.086 [2024-10-07 13:36:25.732875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.086 [2024-10-07 13:36:25.732893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.086 [2024-10-07 13:36:25.733130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.086 [2024-10-07 13:36:25.733205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.086 [2024-10-07 13:36:25.733240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.086 [2024-10-07 13:36:25.733256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.086 [2024-10-07 13:36:25.733441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.087 [2024-10-07 13:36:25.748244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.087 [2024-10-07 13:36:25.748610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.087 [2024-10-07 13:36:25.748646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.087 [2024-10-07 13:36:25.748673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.087 [2024-10-07 13:36:25.748885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.087 [2024-10-07 13:36:25.748943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.087 [2024-10-07 13:36:25.748965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.087 [2024-10-07 13:36:25.748979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.087 [2024-10-07 13:36:25.749005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.087 [2024-10-07 13:36:25.762573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.087 [2024-10-07 13:36:25.762983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.087 [2024-10-07 13:36:25.763016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.087 [2024-10-07 13:36:25.763034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.087 [2024-10-07 13:36:25.763088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.087 [2024-10-07 13:36:25.763274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.087 [2024-10-07 13:36:25.763298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.087 [2024-10-07 13:36:25.763315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.087 [2024-10-07 13:36:25.763367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.087 [2024-10-07 13:36:25.778087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.087 [2024-10-07 13:36:25.778608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.087 [2024-10-07 13:36:25.778640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.087 [2024-10-07 13:36:25.778657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.087 [2024-10-07 13:36:25.778884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.087 [2024-10-07 13:36:25.778943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.087 [2024-10-07 13:36:25.778964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.087 [2024-10-07 13:36:25.778978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.087 [2024-10-07 13:36:25.779160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.087 [2024-10-07 13:36:25.793203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.087 [2024-10-07 13:36:25.793325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.087 [2024-10-07 13:36:25.793355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.087 [2024-10-07 13:36:25.793372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.087 [2024-10-07 13:36:25.793397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.087 [2024-10-07 13:36:25.793428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.087 [2024-10-07 13:36:25.793445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.087 [2024-10-07 13:36:25.793458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.087 [2024-10-07 13:36:25.793483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.087 [2024-10-07 13:36:25.803856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.087 [2024-10-07 13:36:25.806731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.087 [2024-10-07 13:36:25.806763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.087 [2024-10-07 13:36:25.806782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.087 [2024-10-07 13:36:25.808363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.087 [2024-10-07 13:36:25.808432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.087 [2024-10-07 13:36:25.808451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.087 [2024-10-07 13:36:25.808465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.087 [2024-10-07 13:36:25.808490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.087 [2024-10-07 13:36:25.813944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.087 [2024-10-07 13:36:25.814114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.087 [2024-10-07 13:36:25.814145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.087 [2024-10-07 13:36:25.814162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.087 [2024-10-07 13:36:25.814187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.087 [2024-10-07 13:36:25.814211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.087 [2024-10-07 13:36:25.814227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.087 [2024-10-07 13:36:25.814240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.087 [2024-10-07 13:36:25.814264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.087 [2024-10-07 13:36:25.824296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.087 [2024-10-07 13:36:25.824519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.087 [2024-10-07 13:36:25.824549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.087 [2024-10-07 13:36:25.824566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.087 [2024-10-07 13:36:25.824762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.087 [2024-10-07 13:36:25.824820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.087 [2024-10-07 13:36:25.824842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.087 [2024-10-07 13:36:25.824856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.087 [2024-10-07 13:36:25.824887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.087 [2024-10-07 13:36:25.839726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.087 [2024-10-07 13:36:25.840108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.087 [2024-10-07 13:36:25.840151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.087 [2024-10-07 13:36:25.840168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.087 [2024-10-07 13:36:25.840400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.087 [2024-10-07 13:36:25.840471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.087 [2024-10-07 13:36:25.840493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.087 [2024-10-07 13:36:25.840507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.087 [2024-10-07 13:36:25.840533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.087 [2024-10-07 13:36:25.850075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.087 [2024-10-07 13:36:25.850365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.087 [2024-10-07 13:36:25.850396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.087 [2024-10-07 13:36:25.850415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.087 [2024-10-07 13:36:25.853781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.087 [2024-10-07 13:36:25.854639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.087 [2024-10-07 13:36:25.854671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.087 [2024-10-07 13:36:25.854688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.087 [2024-10-07 13:36:25.855069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.087 [2024-10-07 13:36:25.860158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.087 [2024-10-07 13:36:25.860288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.087 [2024-10-07 13:36:25.860316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.087 [2024-10-07 13:36:25.860333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.087 [2024-10-07 13:36:25.860357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.087 [2024-10-07 13:36:25.860381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.087 [2024-10-07 13:36:25.860396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.087 [2024-10-07 13:36:25.860409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.087 [2024-10-07 13:36:25.860433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.087 [2024-10-07 13:36:25.871920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.087 [2024-10-07 13:36:25.872153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.087 [2024-10-07 13:36:25.872184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.087 [2024-10-07 13:36:25.872207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.087 [2024-10-07 13:36:25.872410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.087 [2024-10-07 13:36:25.872482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.087 [2024-10-07 13:36:25.872517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.088 [2024-10-07 13:36:25.872532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.088 [2024-10-07 13:36:25.872575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.088 [2024-10-07 13:36:25.883951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.088 [2024-10-07 13:36:25.886068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-10-07 13:36:25.886101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.088 [2024-10-07 13:36:25.886119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.088 [2024-10-07 13:36:25.886817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.088 [2024-10-07 13:36:25.887079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.088 [2024-10-07 13:36:25.887103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.088 [2024-10-07 13:36:25.887118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.088 [2024-10-07 13:36:25.887336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.088 [2024-10-07 13:36:25.894040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.088 [2024-10-07 13:36:25.894185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-10-07 13:36:25.894215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.088 [2024-10-07 13:36:25.894233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.088 [2024-10-07 13:36:25.894674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.088 [2024-10-07 13:36:25.894722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.088 [2024-10-07 13:36:25.894737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.088 [2024-10-07 13:36:25.894751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.088 [2024-10-07 13:36:25.894791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.088 [2024-10-07 13:36:25.904426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.088 [2024-10-07 13:36:25.904621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-10-07 13:36:25.904652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.088 [2024-10-07 13:36:25.904677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.088 [2024-10-07 13:36:25.904706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.088 [2024-10-07 13:36:25.904731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.088 [2024-10-07 13:36:25.904753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.088 [2024-10-07 13:36:25.904768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.088 [2024-10-07 13:36:25.904793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.088 [2024-10-07 13:36:25.917301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.088 [2024-10-07 13:36:25.917459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.088 [2024-10-07 13:36:25.917490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.088 [2024-10-07 13:36:25.917508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.088 [2024-10-07 13:36:25.917533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.088 [2024-10-07 13:36:25.917557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.088 [2024-10-07 13:36:25.917573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.088 [2024-10-07 13:36:25.917587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.088 [2024-10-07 13:36:25.917611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.088 [2024-10-07 13:36:25.932561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.088 [2024-10-07 13:36:25.933681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-10-07 13:36:25.933713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.088 [2024-10-07 13:36:25.933731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.088 [2024-10-07 13:36:25.934177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.088 [2024-10-07 13:36:25.934490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.088 [2024-10-07 13:36:25.934516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.088 [2024-10-07 13:36:25.934531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.088 [2024-10-07 13:36:25.934602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.088 [2024-10-07 13:36:25.947195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.088 [2024-10-07 13:36:25.947325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-10-07 13:36:25.947356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.088 [2024-10-07 13:36:25.947374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.088 [2024-10-07 13:36:25.947400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.088 [2024-10-07 13:36:25.947424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.088 [2024-10-07 13:36:25.947439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.088 [2024-10-07 13:36:25.947453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.088 [2024-10-07 13:36:25.947478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.088 [2024-10-07 13:36:25.962657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.088 [2024-10-07 13:36:25.962866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-10-07 13:36:25.962898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.088 [2024-10-07 13:36:25.962917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.088 [2024-10-07 13:36:25.962943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.088 [2024-10-07 13:36:25.962968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.088 [2024-10-07 13:36:25.962984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.088 [2024-10-07 13:36:25.962998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.088 [2024-10-07 13:36:25.963023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.088 [2024-10-07 13:36:25.976208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.088 [2024-10-07 13:36:25.978349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-10-07 13:36:25.978381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.088 [2024-10-07 13:36:25.978406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.088 [2024-10-07 13:36:25.979070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.088 [2024-10-07 13:36:25.979357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.088 [2024-10-07 13:36:25.979383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.088 [2024-10-07 13:36:25.979397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.088 [2024-10-07 13:36:25.979613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.088 [2024-10-07 13:36:25.986297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.088 [2024-10-07 13:36:25.986847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-10-07 13:36:25.986879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.088 [2024-10-07 13:36:25.986898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.088 [2024-10-07 13:36:25.986925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.088 [2024-10-07 13:36:25.986949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.088 [2024-10-07 13:36:25.986965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.088 [2024-10-07 13:36:25.986979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.088 [2024-10-07 13:36:25.987003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.088 [2024-10-07 13:36:25.996385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.088 [2024-10-07 13:36:25.996533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.088 [2024-10-07 13:36:25.996563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.088 [2024-10-07 13:36:25.996581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.088 [2024-10-07 13:36:25.996782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.089 [2024-10-07 13:36:25.996841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.089 [2024-10-07 13:36:25.996863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.089 [2024-10-07 13:36:25.996878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.089 [2024-10-07 13:36:25.996903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.089 [2024-10-07 13:36:26.009966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.089 [2024-10-07 13:36:26.010114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.089 [2024-10-07 13:36:26.010146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.089 [2024-10-07 13:36:26.010164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.089 [2024-10-07 13:36:26.010190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.089 [2024-10-07 13:36:26.010215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.089 [2024-10-07 13:36:26.010230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.089 [2024-10-07 13:36:26.010243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.089 [2024-10-07 13:36:26.010269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.089 [2024-10-07 13:36:26.024809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.089 [2024-10-07 13:36:26.025009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.089 [2024-10-07 13:36:26.025041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.089 [2024-10-07 13:36:26.025059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.089 [2024-10-07 13:36:26.025085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.089 [2024-10-07 13:36:26.025110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.089 [2024-10-07 13:36:26.025126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.089 [2024-10-07 13:36:26.025139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.089 [2024-10-07 13:36:26.025164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.089 [2024-10-07 13:36:26.038583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.089 [2024-10-07 13:36:26.040073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.089 [2024-10-07 13:36:26.040106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.089 [2024-10-07 13:36:26.040123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.089 [2024-10-07 13:36:26.040610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.089 [2024-10-07 13:36:26.040739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.089 [2024-10-07 13:36:26.040763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.089 [2024-10-07 13:36:26.040783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.089 [2024-10-07 13:36:26.040809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.089 [2024-10-07 13:36:26.048848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.089 [2024-10-07 13:36:26.049000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.089 [2024-10-07 13:36:26.049031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.089 [2024-10-07 13:36:26.049049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.089 [2024-10-07 13:36:26.049074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.089 [2024-10-07 13:36:26.049098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.089 [2024-10-07 13:36:26.049114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.089 [2024-10-07 13:36:26.049128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.089 [2024-10-07 13:36:26.049152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.089 [2024-10-07 13:36:26.058935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.089 [2024-10-07 13:36:26.059221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.089 [2024-10-07 13:36:26.059253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.089 [2024-10-07 13:36:26.059270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.089 [2024-10-07 13:36:26.059322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.089 [2024-10-07 13:36:26.059350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.089 [2024-10-07 13:36:26.059365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.089 [2024-10-07 13:36:26.059379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.089 [2024-10-07 13:36:26.059405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.089 [2024-10-07 13:36:26.072796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.089 [2024-10-07 13:36:26.072939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.089 [2024-10-07 13:36:26.072970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.089 [2024-10-07 13:36:26.072988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.089 [2024-10-07 13:36:26.073013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.089 [2024-10-07 13:36:26.073050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.089 [2024-10-07 13:36:26.073069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.089 [2024-10-07 13:36:26.073083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.089 [2024-10-07 13:36:26.073108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.089 [2024-10-07 13:36:26.087728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.089 [2024-10-07 13:36:26.087879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.089 [2024-10-07 13:36:26.087907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.089 [2024-10-07 13:36:26.087924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.089 [2024-10-07 13:36:26.087953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.089 [2024-10-07 13:36:26.087977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.089 [2024-10-07 13:36:26.087992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.089 [2024-10-07 13:36:26.088006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.089 [2024-10-07 13:36:26.088031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.089 [2024-10-07 13:36:26.103863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.089 [2024-10-07 13:36:26.104076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.089 [2024-10-07 13:36:26.104107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.089 [2024-10-07 13:36:26.104125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.089 [2024-10-07 13:36:26.104151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.089 [2024-10-07 13:36:26.104201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.089 [2024-10-07 13:36:26.104224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.089 [2024-10-07 13:36:26.104238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.089 [2024-10-07 13:36:26.104263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.089 [2024-10-07 13:36:26.113968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.089 [2024-10-07 13:36:26.114103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.089 [2024-10-07 13:36:26.114149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.089 [2024-10-07 13:36:26.114166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.089 [2024-10-07 13:36:26.114192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.089 [2024-10-07 13:36:26.114224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.089 [2024-10-07 13:36:26.114241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.089 [2024-10-07 13:36:26.114255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.089 [2024-10-07 13:36:26.114280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.089 [2024-10-07 13:36:26.124684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.089 [2024-10-07 13:36:26.124820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.089 [2024-10-07 13:36:26.124851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.089 [2024-10-07 13:36:26.124868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.089 [2024-10-07 13:36:26.124899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.089 [2024-10-07 13:36:26.124925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.089 [2024-10-07 13:36:26.124940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.089 [2024-10-07 13:36:26.124954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.089 [2024-10-07 13:36:26.124979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.089 [2024-10-07 13:36:26.136077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.089 [2024-10-07 13:36:26.136408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.089 [2024-10-07 13:36:26.136442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.089 [2024-10-07 13:36:26.136461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.090 [2024-10-07 13:36:26.136512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.090 [2024-10-07 13:36:26.136541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.090 [2024-10-07 13:36:26.136556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.090 [2024-10-07 13:36:26.136569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.090 [2024-10-07 13:36:26.136594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.090 [2024-10-07 13:36:26.148974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.090 [2024-10-07 13:36:26.149212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.090 [2024-10-07 13:36:26.149244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.090 [2024-10-07 13:36:26.149262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.090 [2024-10-07 13:36:26.149371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.090 [2024-10-07 13:36:26.151532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.090 [2024-10-07 13:36:26.151560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.090 [2024-10-07 13:36:26.151575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.090 [2024-10-07 13:36:26.152394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.090 [2024-10-07 13:36:26.159064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.090 [2024-10-07 13:36:26.159219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.090 [2024-10-07 13:36:26.159248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.090 [2024-10-07 13:36:26.159265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.090 [2024-10-07 13:36:26.159290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.090 [2024-10-07 13:36:26.159314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.090 [2024-10-07 13:36:26.159330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.090 [2024-10-07 13:36:26.159350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.090 [2024-10-07 13:36:26.159376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.090 [2024-10-07 13:36:26.169157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.090 [2024-10-07 13:36:26.169360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.090 [2024-10-07 13:36:26.169391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.090 [2024-10-07 13:36:26.169408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.090 [2024-10-07 13:36:26.169434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.090 [2024-10-07 13:36:26.169617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.090 [2024-10-07 13:36:26.169642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.090 [2024-10-07 13:36:26.169662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.090 [2024-10-07 13:36:26.169726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.090 [2024-10-07 13:36:26.184228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.090 [2024-10-07 13:36:26.184659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.090 [2024-10-07 13:36:26.184699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.090 [2024-10-07 13:36:26.184720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.090 [2024-10-07 13:36:26.184925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.090 [2024-10-07 13:36:26.184998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.090 [2024-10-07 13:36:26.185019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.090 [2024-10-07 13:36:26.185033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.090 [2024-10-07 13:36:26.185058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.090 [2024-10-07 13:36:26.199552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.090 [2024-10-07 13:36:26.199673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.090 [2024-10-07 13:36:26.199703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.090 [2024-10-07 13:36:26.199721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.090 [2024-10-07 13:36:26.199747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.090 [2024-10-07 13:36:26.199771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.090 [2024-10-07 13:36:26.199786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.090 [2024-10-07 13:36:26.199800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.090 [2024-10-07 13:36:26.199825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.090 [2024-10-07 13:36:26.209638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.090 [2024-10-07 13:36:26.209856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.090 [2024-10-07 13:36:26.209891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.090 [2024-10-07 13:36:26.209909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.090 [2024-10-07 13:36:26.209934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.090 [2024-10-07 13:36:26.209967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.090 [2024-10-07 13:36:26.209982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.090 [2024-10-07 13:36:26.209996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.090 [2024-10-07 13:36:26.210021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.090 [2024-10-07 13:36:26.219797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.090 [2024-10-07 13:36:26.220007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.090 [2024-10-07 13:36:26.220050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.090 [2024-10-07 13:36:26.220068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.090 [2024-10-07 13:36:26.220094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.090 [2024-10-07 13:36:26.220118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.090 [2024-10-07 13:36:26.220134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.090 [2024-10-07 13:36:26.220147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.090 [2024-10-07 13:36:26.220172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.090 [2024-10-07 13:36:26.232597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.090 [2024-10-07 13:36:26.232787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-10-07 13:36:26.232819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.090 [2024-10-07 13:36:26.232836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.090 [2024-10-07 13:36:26.232862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.090 [2024-10-07 13:36:26.232887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.090 [2024-10-07 13:36:26.232902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.090 [2024-10-07 13:36:26.232916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.090 [2024-10-07 13:36:26.232940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.090 [2024-10-07 13:36:26.248176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.090 [2024-10-07 13:36:26.248299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-10-07 13:36:26.248330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.090 [2024-10-07 13:36:26.248348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.090 [2024-10-07 13:36:26.248374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.090 [2024-10-07 13:36:26.248405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.090 [2024-10-07 13:36:26.248421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.090 [2024-10-07 13:36:26.248434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.090 [2024-10-07 13:36:26.248459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.090 [2024-10-07 13:36:26.262659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.090 [2024-10-07 13:36:26.263411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.090 [2024-10-07 13:36:26.263443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.090 [2024-10-07 13:36:26.263461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.090 [2024-10-07 13:36:26.263709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.090 [2024-10-07 13:36:26.263920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.090 [2024-10-07 13:36:26.263944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.090 [2024-10-07 13:36:26.263969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.090 [2024-10-07 13:36:26.264036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.090 [2024-10-07 13:36:26.278177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.091 [2024-10-07 13:36:26.278871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-10-07 13:36:26.278904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.091 [2024-10-07 13:36:26.278931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.091 [2024-10-07 13:36:26.279306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.091 [2024-10-07 13:36:26.279377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.091 [2024-10-07 13:36:26.279414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.091 [2024-10-07 13:36:26.279428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.091 [2024-10-07 13:36:26.279492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.091 [2024-10-07 13:36:26.288266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.091 [2024-10-07 13:36:26.288428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-10-07 13:36:26.288467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.091 [2024-10-07 13:36:26.288484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.091 [2024-10-07 13:36:26.288510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.091 [2024-10-07 13:36:26.288533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.091 [2024-10-07 13:36:26.288549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.091 [2024-10-07 13:36:26.288562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.091 [2024-10-07 13:36:26.288593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.091 [2024-10-07 13:36:26.299803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.091 [2024-10-07 13:36:26.299988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-10-07 13:36:26.300020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.091 [2024-10-07 13:36:26.300038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.091 [2024-10-07 13:36:26.300064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.091 [2024-10-07 13:36:26.300088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.091 [2024-10-07 13:36:26.300104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.091 [2024-10-07 13:36:26.300117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.091 [2024-10-07 13:36:26.300143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.091 [2024-10-07 13:36:26.311676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.091 [2024-10-07 13:36:26.311998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-10-07 13:36:26.312031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.091 [2024-10-07 13:36:26.312048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.091 [2024-10-07 13:36:26.312535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.091 [2024-10-07 13:36:26.312775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.091 [2024-10-07 13:36:26.312800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.091 [2024-10-07 13:36:26.312815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.091 [2024-10-07 13:36:26.312867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.091 [2024-10-07 13:36:26.322112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.091 [2024-10-07 13:36:26.322325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-10-07 13:36:26.322357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.091 [2024-10-07 13:36:26.322375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.091 [2024-10-07 13:36:26.326697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.091 [2024-10-07 13:36:26.326778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.091 [2024-10-07 13:36:26.326800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.091 [2024-10-07 13:36:26.326815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.091 [2024-10-07 13:36:26.326841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.091 [2024-10-07 13:36:26.332393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.091 [2024-10-07 13:36:26.334620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-10-07 13:36:26.334653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.091 [2024-10-07 13:36:26.334692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.091 [2024-10-07 13:36:26.335385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.091 [2024-10-07 13:36:26.335416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.091 [2024-10-07 13:36:26.335432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.091 [2024-10-07 13:36:26.335444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.091 [2024-10-07 13:36:26.335470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.091 [2024-10-07 13:36:26.342496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.091 [2024-10-07 13:36:26.342826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-10-07 13:36:26.342858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.091 [2024-10-07 13:36:26.342876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.091 [2024-10-07 13:36:26.342939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.091 [2024-10-07 13:36:26.342968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.091 [2024-10-07 13:36:26.342994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.091 [2024-10-07 13:36:26.343007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.091 [2024-10-07 13:36:26.343190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.091 [2024-10-07 13:36:26.357708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.091 [2024-10-07 13:36:26.358009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-10-07 13:36:26.358040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.091 [2024-10-07 13:36:26.358059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.091 [2024-10-07 13:36:26.358109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.091 [2024-10-07 13:36:26.358148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.091 [2024-10-07 13:36:26.358163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.091 [2024-10-07 13:36:26.358176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.091 [2024-10-07 13:36:26.358201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.091 [2024-10-07 13:36:26.372696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.091 [2024-10-07 13:36:26.372822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-10-07 13:36:26.372855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.091 [2024-10-07 13:36:26.372872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.091 [2024-10-07 13:36:26.372898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.091 [2024-10-07 13:36:26.372923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.091 [2024-10-07 13:36:26.372943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.091 [2024-10-07 13:36:26.372958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.091 [2024-10-07 13:36:26.372994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.091 [2024-10-07 13:36:26.383348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.091 [2024-10-07 13:36:26.386247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-10-07 13:36:26.386279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.091 [2024-10-07 13:36:26.386300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.091 [2024-10-07 13:36:26.387880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.091 [2024-10-07 13:36:26.387961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.091 [2024-10-07 13:36:26.387982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.091 [2024-10-07 13:36:26.387996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.091 [2024-10-07 13:36:26.388022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.091 [2024-10-07 13:36:26.393549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.091 [2024-10-07 13:36:26.393711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.091 [2024-10-07 13:36:26.393742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.091 [2024-10-07 13:36:26.393760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.091 [2024-10-07 13:36:26.393785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.091 [2024-10-07 13:36:26.393809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.091 [2024-10-07 13:36:26.393825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.091 [2024-10-07 13:36:26.393838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.091 [2024-10-07 13:36:26.393863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.092 [2024-10-07 13:36:26.403637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.092 [2024-10-07 13:36:26.403860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.092 [2024-10-07 13:36:26.403890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.092 [2024-10-07 13:36:26.403908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.092 [2024-10-07 13:36:26.404092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.092 [2024-10-07 13:36:26.404164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.092 [2024-10-07 13:36:26.404186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.092 [2024-10-07 13:36:26.404200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.092 [2024-10-07 13:36:26.404224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.092 [2024-10-07 13:36:26.412178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 
lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.092 [2024-10-07 13:36:26.412643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 
[2024-10-07 13:36:26.412749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.412960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.412975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.413009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.413025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.413038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.413068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.413083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.413098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.413112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.413126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.413140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.413155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.413169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.413184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.413198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.413213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.413226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 [2024-10-07 13:36:26.413241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.092 [2024-10-07 13:36:26.413255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.092 
[2024-10-07 13:36:26.414649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.094 [2024-10-07 13:36:26.414663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.094 [2024-10-07 13:36:26.414703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.094 [2024-10-07 13:36:26.414728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.094 [2024-10-07 13:36:26.414744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.094 [2024-10-07 13:36:26.414759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.094 [2024-10-07 13:36:26.414774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.094 [2024-10-07 13:36:26.414788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.094 [2024-10-07 13:36:26.414804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.094 [2024-10-07 13:36:26.414819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.094 [2024-10-07 13:36:26.414834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.094 [2024-10-07 13:36:26.414848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.094 [2024-10-07 
13:36:26.414864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.094 [2024-10-07 13:36:26.414878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:56.095 [2024-10-07 13:36:26.416131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.095 [2024-10-07 13:36:26.416144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.095 [2024-10-07 13:36:26.416180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.095 [2024-10-07 13:36:26.416201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60960 len:8 PRP1 0x0 PRP2 0x0 00:25:56.095 [2024-10-07 13:36:26.416216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.095 [2024-10-07 13:36:26.416232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.095 [2024-10-07 13:36:26.416244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.095 [2024-10-07 13:36:26.416255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60968 len:8 PRP1 0x0 PRP2 0x0 00:25:56.095 [2024-10-07 13:36:26.416267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.095 [2024-10-07 13:36:26.416326] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d125a0 was disconnected and freed. reset controller. 
00:25:56.095 [2024-10-07 13:36:26.416394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.095 [2024-10-07 13:36:26.416431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.095 [2024-10-07 13:36:26.416447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.095 [2024-10-07 13:36:26.416460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.095 [2024-10-07 13:36:26.416474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.095 [2024-10-07 13:36:26.416487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.095 [2024-10-07 13:36:26.416501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.095 [2024-10-07 13:36:26.416514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.095 [2024-10-07 13:36:26.416527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.095 [2024-10-07 13:36:26.417765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.095 [2024-10-07 13:36:26.417796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.095 [2024-10-07 13:36:26.417826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.095 [2024-10-07 13:36:26.417942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-10-07 13:36:26.417980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.095 [2024-10-07 13:36:26.417996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.095 [2024-10-07 13:36:26.418107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-10-07 13:36:26.418132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.095 [2024-10-07 13:36:26.418148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.095 [2024-10-07 13:36:26.418173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.095 [2024-10-07 13:36:26.418193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.095 [2024-10-07 13:36:26.418214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.095 [2024-10-07 13:36:26.418233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.095 [2024-10-07 13:36:26.418247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.095 [2024-10-07 13:36:26.418264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.095 [2024-10-07 13:36:26.418283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.095 [2024-10-07 13:36:26.418296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.095 [2024-10-07 13:36:26.418322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.095 [2024-10-07 13:36:26.418339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.095 [2024-10-07 13:36:26.427914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.095 [2024-10-07 13:36:26.428132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.095 [2024-10-07 13:36:26.428262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-10-07 13:36:26.428291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.095 [2024-10-07 13:36:26.428308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.095 [2024-10-07 13:36:26.428463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-10-07 13:36:26.428489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.095 [2024-10-07 13:36:26.428505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.095 [2024-10-07 13:36:26.428524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.095 [2024-10-07 13:36:26.428550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.095 [2024-10-07 13:36:26.428569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.095 [2024-10-07 13:36:26.428582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.095 [2024-10-07 13:36:26.428595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.095 [2024-10-07 13:36:26.428620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.095 [2024-10-07 13:36:26.428638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.095 [2024-10-07 13:36:26.428650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.095 [2024-10-07 13:36:26.428664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.095 [2024-10-07 13:36:26.428714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.095 [2024-10-07 13:36:26.439481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.095 [2024-10-07 13:36:26.439531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.095 [2024-10-07 13:36:26.441627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-10-07 13:36:26.441661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.095 [2024-10-07 13:36:26.441692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.095 [2024-10-07 13:36:26.441815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-10-07 13:36:26.441840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.095 [2024-10-07 13:36:26.441856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.095 [2024-10-07 13:36:26.442931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.095 [2024-10-07 13:36:26.442979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.095 [2024-10-07 13:36:26.443384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.095 [2024-10-07 13:36:26.443410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.095 [2024-10-07 13:36:26.443441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.095 [2024-10-07 13:36:26.443459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.095 [2024-10-07 13:36:26.443474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.095 [2024-10-07 13:36:26.443503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.095 [2024-10-07 13:36:26.443742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.095 [2024-10-07 13:36:26.443766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.095 [2024-10-07 13:36:26.450818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.095 [2024-10-07 13:36:26.450852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.095 [2024-10-07 13:36:26.451076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.095 [2024-10-07 13:36:26.451108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.096 [2024-10-07 13:36:26.451125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.096 [2024-10-07 13:36:26.451239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.096 [2024-10-07 13:36:26.451265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.096 [2024-10-07 13:36:26.451281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.096 [2024-10-07 13:36:26.451390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.096 [2024-10-07 13:36:26.451417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.096 [2024-10-07 13:36:26.451533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.096 [2024-10-07 13:36:26.451569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.096 [2024-10-07 13:36:26.451583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.096 [2024-10-07 13:36:26.451601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.096 [2024-10-07 13:36:26.451615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.096 [2024-10-07 13:36:26.451645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.096 [2024-10-07 13:36:26.451763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.096 [2024-10-07 13:36:26.451794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.096 [2024-10-07 13:36:26.461239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.096 [2024-10-07 13:36:26.461287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.096 [2024-10-07 13:36:26.461455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.096 [2024-10-07 13:36:26.461484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.096 [2024-10-07 13:36:26.461502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.096 [2024-10-07 13:36:26.461651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.096 [2024-10-07 13:36:26.461685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.096 [2024-10-07 13:36:26.461704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.096 [2024-10-07 13:36:26.461731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.096 [2024-10-07 13:36:26.461753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.096 [2024-10-07 13:36:26.461774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.096 [2024-10-07 13:36:26.461789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.096 [2024-10-07 13:36:26.461802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.096 [2024-10-07 13:36:26.461819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.096 [2024-10-07 13:36:26.461833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.096 [2024-10-07 13:36:26.461846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.096 [2024-10-07 13:36:26.461870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.096 [2024-10-07 13:36:26.461886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.096 [2024-10-07 13:36:26.472379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.096 [2024-10-07 13:36:26.472413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.096 [2024-10-07 13:36:26.472757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.096 [2024-10-07 13:36:26.472791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.096 [2024-10-07 13:36:26.472809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.096 [2024-10-07 13:36:26.472924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.096 [2024-10-07 13:36:26.472950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.096 [2024-10-07 13:36:26.472966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.096 [2024-10-07 13:36:26.473171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.096 [2024-10-07 13:36:26.473199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.096 [2024-10-07 13:36:26.473248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.096 [2024-10-07 13:36:26.473268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.096 [2024-10-07 13:36:26.473288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.096 [2024-10-07 13:36:26.473306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.096 [2024-10-07 13:36:26.473320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.096 [2024-10-07 13:36:26.473333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.096 [2024-10-07 13:36:26.473530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.096 [2024-10-07 13:36:26.473552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.096 [2024-10-07 13:36:26.488034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.096 [2024-10-07 13:36:26.488067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.096 [2024-10-07 13:36:26.488439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.096 [2024-10-07 13:36:26.488471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.096 [2024-10-07 13:36:26.488489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.096 [2024-10-07 13:36:26.488596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.096 [2024-10-07 13:36:26.488622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.096 [2024-10-07 13:36:26.488639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.096 [2024-10-07 13:36:26.489033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.096 [2024-10-07 13:36:26.489079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.096 [2024-10-07 13:36:26.489153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.096 [2024-10-07 13:36:26.489174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.096 [2024-10-07 13:36:26.489189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.096 [2024-10-07 13:36:26.489207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.096 [2024-10-07 13:36:26.489222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.096 [2024-10-07 13:36:26.489234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.096 [2024-10-07 13:36:26.489259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.096 [2024-10-07 13:36:26.489276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.096 [2024-10-07 13:36:26.503403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.096 [2024-10-07 13:36:26.503436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.096 [2024-10-07 13:36:26.503546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.096 [2024-10-07 13:36:26.503575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.096 [2024-10-07 13:36:26.503592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.096 [2024-10-07 13:36:26.503736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.096 [2024-10-07 13:36:26.503768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.096 [2024-10-07 13:36:26.503785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.096 [2024-10-07 13:36:26.503812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.096 [2024-10-07 13:36:26.503834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.096 [2024-10-07 13:36:26.503855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.096 [2024-10-07 13:36:26.503870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.096 [2024-10-07 13:36:26.503884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.096 [2024-10-07 13:36:26.503901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.096 [2024-10-07 13:36:26.503915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.096 [2024-10-07 13:36:26.503928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.096 [2024-10-07 13:36:26.503953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.096 [2024-10-07 13:36:26.503970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.096 [2024-10-07 13:36:26.519331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.096 [2024-10-07 13:36:26.519365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.096 [2024-10-07 13:36:26.519574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.096 [2024-10-07 13:36:26.519604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.096 [2024-10-07 13:36:26.519621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.096 [2024-10-07 13:36:26.519736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.096 [2024-10-07 13:36:26.519764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.096 [2024-10-07 13:36:26.519780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.096 [2024-10-07 13:36:26.519806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.096 [2024-10-07 13:36:26.519827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.096 [2024-10-07 13:36:26.519849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.096 [2024-10-07 13:36:26.519863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.097 [2024-10-07 13:36:26.519877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.097 [2024-10-07 13:36:26.519894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.097 [2024-10-07 13:36:26.519909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.097 [2024-10-07 13:36:26.519923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.097 [2024-10-07 13:36:26.519947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.097 [2024-10-07 13:36:26.519964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.097 [2024-10-07 13:36:26.532525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.097 [2024-10-07 13:36:26.532559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.097 [2024-10-07 13:36:26.532804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.097 [2024-10-07 13:36:26.532834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.097 [2024-10-07 13:36:26.532851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.097 [2024-10-07 13:36:26.532937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.097 [2024-10-07 13:36:26.532963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.097 [2024-10-07 13:36:26.532979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.097 [2024-10-07 13:36:26.533086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.097 [2024-10-07 13:36:26.533114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.097 [2024-10-07 13:36:26.535295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.097 [2024-10-07 13:36:26.535321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.097 [2024-10-07 13:36:26.535336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.097 [2024-10-07 13:36:26.535354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.097 [2024-10-07 13:36:26.535369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.097 [2024-10-07 13:36:26.535382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.097 [2024-10-07 13:36:26.536228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.097 [2024-10-07 13:36:26.536253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.097 [2024-10-07 13:36:26.542828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.097 [2024-10-07 13:36:26.542861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.097 [2024-10-07 13:36:26.543008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.097 [2024-10-07 13:36:26.543036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.097 [2024-10-07 13:36:26.543053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.097 [2024-10-07 13:36:26.543183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.097 [2024-10-07 13:36:26.543209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.097 [2024-10-07 13:36:26.543224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.097 [2024-10-07 13:36:26.543609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.097 [2024-10-07 13:36:26.543640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.097 [2024-10-07 13:36:26.543700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.097 [2024-10-07 13:36:26.543723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.097 [2024-10-07 13:36:26.543741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.097 [2024-10-07 13:36:26.543760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.097 [2024-10-07 13:36:26.543776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.097 [2024-10-07 13:36:26.543788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.097 [2024-10-07 13:36:26.543824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.097 [2024-10-07 13:36:26.543842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.097 [2024-10-07 13:36:26.552943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.097 [2024-10-07 13:36:26.553175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.097 [2024-10-07 13:36:26.553366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.097 [2024-10-07 13:36:26.553397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.097 [2024-10-07 13:36:26.553415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.097 [2024-10-07 13:36:26.553535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.097 [2024-10-07 13:36:26.553562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.097 [2024-10-07 13:36:26.553579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.097 [2024-10-07 13:36:26.553598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.097 [2024-10-07 13:36:26.553795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.097 [2024-10-07 13:36:26.553836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.097 [2024-10-07 13:36:26.553851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.097 [2024-10-07 13:36:26.553864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.097 [2024-10-07 13:36:26.553928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.097 [2024-10-07 13:36:26.553950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.097 [2024-10-07 13:36:26.553963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.097 [2024-10-07 13:36:26.553978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.097 [2024-10-07 13:36:26.554002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.097 [2024-10-07 13:36:26.563454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.097 [2024-10-07 13:36:26.563587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.097 [2024-10-07 13:36:26.563795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.097 [2024-10-07 13:36:26.563827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.097 [2024-10-07 13:36:26.563844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.097 [2024-10-07 13:36:26.564018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.097 [2024-10-07 13:36:26.564046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.097 [2024-10-07 13:36:26.564068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.097 [2024-10-07 13:36:26.564087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.097 [2024-10-07 13:36:26.564198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.097 [2024-10-07 13:36:26.564221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.097 [2024-10-07 13:36:26.564234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.097 [2024-10-07 13:36:26.564248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.097 [2024-10-07 13:36:26.566952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.097 [2024-10-07 13:36:26.566981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.097 [2024-10-07 13:36:26.566995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.097 [2024-10-07 13:36:26.567009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.097 [2024-10-07 13:36:26.568032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.097 [2024-10-07 13:36:26.573540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.097 [2024-10-07 13:36:26.573736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.097 [2024-10-07 13:36:26.573765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.097 [2024-10-07 13:36:26.573782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.097 [2024-10-07 13:36:26.573807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.097 [2024-10-07 13:36:26.573845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.097 [2024-10-07 13:36:26.573864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.097 [2024-10-07 13:36:26.573879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.097 [2024-10-07 13:36:26.573905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.097 [2024-10-07 13:36:26.573925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.098 [2024-10-07 13:36:26.574101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.098 [2024-10-07 13:36:26.574128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.098 [2024-10-07 13:36:26.574144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.098 [2024-10-07 13:36:26.574169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.098 [2024-10-07 13:36:26.574194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.098 [2024-10-07 13:36:26.574209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.098 [2024-10-07 13:36:26.574223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.098 [2024-10-07 13:36:26.574248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.098 [2024-10-07 13:36:26.583622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.098 [2024-10-07 13:36:26.583827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.098 [2024-10-07 13:36:26.583857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.098 [2024-10-07 13:36:26.583874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.098 [2024-10-07 13:36:26.583900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.098 [2024-10-07 13:36:26.584025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.098 [2024-10-07 13:36:26.584049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.098 [2024-10-07 13:36:26.584063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.098 [2024-10-07 13:36:26.584416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.098 [2024-10-07 13:36:26.584484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.098 [2024-10-07 13:36:26.584595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.098 [2024-10-07 13:36:26.584623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.098 [2024-10-07 13:36:26.584640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.098 [2024-10-07 13:36:26.584676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.098 [2024-10-07 13:36:26.584704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.098 [2024-10-07 13:36:26.584720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.098 [2024-10-07 13:36:26.584733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.098 [2024-10-07 13:36:26.584757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.098 [2024-10-07 13:36:26.597451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.098 [2024-10-07 13:36:26.597486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.098 [2024-10-07 13:36:26.597784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.098 [2024-10-07 13:36:26.597815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.098 [2024-10-07 13:36:26.597833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.098 [2024-10-07 13:36:26.597955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.098 [2024-10-07 13:36:26.597982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.098 [2024-10-07 13:36:26.597998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.098 [2024-10-07 13:36:26.598202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.098 [2024-10-07 13:36:26.598231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.098 [2024-10-07 13:36:26.598279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.098 [2024-10-07 13:36:26.598298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.098 [2024-10-07 13:36:26.598312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.098 [2024-10-07 13:36:26.598335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.098 [2024-10-07 13:36:26.598351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.098 [2024-10-07 13:36:26.598364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.098 [2024-10-07 13:36:26.598546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.098 [2024-10-07 13:36:26.598570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.098 8379.60 IOPS, 32.73 MiB/s [2024-10-07T11:36:37.810Z] [2024-10-07 13:36:26.612630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.098 [2024-10-07 13:36:26.612660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.098 [2024-10-07 13:36:26.612787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.098 [2024-10-07 13:36:26.612815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.098 [2024-10-07 13:36:26.612831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.098 [2024-10-07 13:36:26.612938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.098 [2024-10-07 13:36:26.612963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.098 [2024-10-07 13:36:26.612979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.098 [2024-10-07 13:36:26.613006] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.098 [2024-10-07 13:36:26.613027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.098 [2024-10-07 13:36:26.613062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.098 [2024-10-07 13:36:26.613082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.098 [2024-10-07 13:36:26.613095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.098 [2024-10-07 13:36:26.613112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.098 [2024-10-07 13:36:26.613127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.098 [2024-10-07 13:36:26.613139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.098 [2024-10-07 13:36:26.613164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.098 [2024-10-07 13:36:26.613180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.098 [2024-10-07 13:36:26.625024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.098 [2024-10-07 13:36:26.625058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.098 [2024-10-07 13:36:26.625198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.098 [2024-10-07 13:36:26.625227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.098 [2024-10-07 13:36:26.625243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.098 [2024-10-07 13:36:26.625351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.098 [2024-10-07 13:36:26.625377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.098 [2024-10-07 13:36:26.625393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.098 [2024-10-07 13:36:26.625425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.098 [2024-10-07 13:36:26.625447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.098 [2024-10-07 13:36:26.625469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.098 [2024-10-07 13:36:26.625484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.098 [2024-10-07 13:36:26.625497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.098 [2024-10-07 13:36:26.625514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.098 [2024-10-07 13:36:26.625528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.098 [2024-10-07 13:36:26.625541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.098 [2024-10-07 13:36:26.625566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.098 [2024-10-07 13:36:26.625583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.098 [2024-10-07 13:36:26.638449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.098 [2024-10-07 13:36:26.638482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.098 [2024-10-07 13:36:26.638707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.098 [2024-10-07 13:36:26.638737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.098 [2024-10-07 13:36:26.638754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.098 [2024-10-07 13:36:26.638872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.098 [2024-10-07 13:36:26.638898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.098 [2024-10-07 13:36:26.638914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.098 [2024-10-07 13:36:26.639023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.098 [2024-10-07 13:36:26.639050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.098 [2024-10-07 13:36:26.639179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.098 [2024-10-07 13:36:26.639199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.098 [2024-10-07 13:36:26.639213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.098 [2024-10-07 13:36:26.639230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.098 [2024-10-07 13:36:26.639244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.098 [2024-10-07 13:36:26.639256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.098 [2024-10-07 13:36:26.642480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.099 [2024-10-07 13:36:26.642509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.099 [2024-10-07 13:36:26.648563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.099 [2024-10-07 13:36:26.648608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.099 [2024-10-07 13:36:26.648779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-10-07 13:36:26.648808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.099 [2024-10-07 13:36:26.648825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.099 [2024-10-07 13:36:26.648948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-10-07 13:36:26.648974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.099 [2024-10-07 13:36:26.648991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.099 [2024-10-07 13:36:26.649010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.099 [2024-10-07 13:36:26.649036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.099 [2024-10-07 13:36:26.649054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.099 [2024-10-07 13:36:26.649067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.099 [2024-10-07 13:36:26.649081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.099 [2024-10-07 13:36:26.649105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.099 [2024-10-07 13:36:26.649122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.099 [2024-10-07 13:36:26.649135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.099 [2024-10-07 13:36:26.649149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.099 [2024-10-07 13:36:26.649171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.099 [2024-10-07 13:36:26.658649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.099 [2024-10-07 13:36:26.658807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-10-07 13:36:26.658837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.099 [2024-10-07 13:36:26.658854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.099 [2024-10-07 13:36:26.659052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.099 [2024-10-07 13:36:26.659145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.099 [2024-10-07 13:36:26.659180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.099 [2024-10-07 13:36:26.659196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.099 [2024-10-07 13:36:26.659210] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.099 [2024-10-07 13:36:26.659234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.099 [2024-10-07 13:36:26.659354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-10-07 13:36:26.659381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.099 [2024-10-07 13:36:26.659397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.099 [2024-10-07 13:36:26.659582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.099 [2024-10-07 13:36:26.659658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.099 [2024-10-07 13:36:26.659702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.099 [2024-10-07 13:36:26.659717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.099 [2024-10-07 13:36:26.659744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.099 [2024-10-07 13:36:26.673954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.099 [2024-10-07 13:36:26.674002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.099 [2024-10-07 13:36:26.674572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-10-07 13:36:26.674604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.099 [2024-10-07 13:36:26.674621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.099 [2024-10-07 13:36:26.674711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-10-07 13:36:26.674738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.099 [2024-10-07 13:36:26.674755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.099 [2024-10-07 13:36:26.674972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.099 [2024-10-07 13:36:26.675001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.099 [2024-10-07 13:36:26.675048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.099 [2024-10-07 13:36:26.675069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.099 [2024-10-07 13:36:26.675082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.099 [2024-10-07 13:36:26.675100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.099 [2024-10-07 13:36:26.675114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.099 [2024-10-07 13:36:26.675127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.099 [2024-10-07 13:36:26.675152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.099 [2024-10-07 13:36:26.675168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.099 [2024-10-07 13:36:26.689348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.099 [2024-10-07 13:36:26.689380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.099 [2024-10-07 13:36:26.689738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-10-07 13:36:26.689771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.099 [2024-10-07 13:36:26.689789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.099 [2024-10-07 13:36:26.689876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-10-07 13:36:26.689902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.099 [2024-10-07 13:36:26.689918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.099 [2024-10-07 13:36:26.690129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.099 [2024-10-07 13:36:26.690158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.099 [2024-10-07 13:36:26.690360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.099 [2024-10-07 13:36:26.690386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.099 [2024-10-07 13:36:26.690400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.099 [2024-10-07 13:36:26.690418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.099 [2024-10-07 13:36:26.690432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.099 [2024-10-07 13:36:26.690446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.099 [2024-10-07 13:36:26.690687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.099 [2024-10-07 13:36:26.690710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.099 [2024-10-07 13:36:26.704469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.099 [2024-10-07 13:36:26.704517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.099 [2024-10-07 13:36:26.704658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.099 [2024-10-07 13:36:26.704697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.099 [2024-10-07 13:36:26.704714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.099 [2024-10-07 13:36:26.704805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-10-07 13:36:26.704831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.100 [2024-10-07 13:36:26.704848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.100 [2024-10-07 13:36:26.704873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.100 [2024-10-07 13:36:26.704894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.100 [2024-10-07 13:36:26.704915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.100 [2024-10-07 13:36:26.704931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.100 [2024-10-07 13:36:26.704960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.100 [2024-10-07 13:36:26.704977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.100 [2024-10-07 13:36:26.704991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.100 [2024-10-07 13:36:26.705003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.100 [2024-10-07 13:36:26.705042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.100 [2024-10-07 13:36:26.705058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.100 [2024-10-07 13:36:26.720352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.100 [2024-10-07 13:36:26.720386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.100 [2024-10-07 13:36:26.720606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-10-07 13:36:26.720642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.100 [2024-10-07 13:36:26.720660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.100 [2024-10-07 13:36:26.720754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.100 [2024-10-07 13:36:26.720780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.100 [2024-10-07 13:36:26.720797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.100 [2024-10-07 13:36:26.720822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.100 [2024-10-07 13:36:26.720844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.100 [2024-10-07 13:36:26.720865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.100 [2024-10-07 13:36:26.720880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.100 [2024-10-07 13:36:26.720894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.100 [2024-10-07 13:36:26.720911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.100 [2024-10-07 13:36:26.720926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.100 [2024-10-07 13:36:26.720939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.100 [2024-10-07 13:36:26.720964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.100 [2024-10-07 13:36:26.720981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.100 [2024-10-07 13:36:26.736832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.100 [2024-10-07 13:36:26.736865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.100 [2024-10-07 13:36:26.737054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.100 [2024-10-07 13:36:26.737082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.100 [2024-10-07 13:36:26.737100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.100 [2024-10-07 13:36:26.737210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.100 [2024-10-07 13:36:26.737236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.100 [2024-10-07 13:36:26.737253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.100 [2024-10-07 13:36:26.737591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.100 [2024-10-07 13:36:26.737638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.100 [2024-10-07 13:36:26.738020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.100 [2024-10-07 13:36:26.738046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.100 [2024-10-07 13:36:26.738076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.100 [2024-10-07 13:36:26.738094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.100 [2024-10-07 13:36:26.738108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.100 [2024-10-07 13:36:26.738125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.100 [2024-10-07 13:36:26.738198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.100 [2024-10-07 13:36:26.738219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.100 [2024-10-07 13:36:26.752372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.100 [2024-10-07 13:36:26.752405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.100 [2024-10-07 13:36:26.752938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.100 [2024-10-07 13:36:26.752968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.100 [2024-10-07 13:36:26.752985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.100 [2024-10-07 13:36:26.753097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.100 [2024-10-07 13:36:26.753123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.100 [2024-10-07 13:36:26.753138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.100 [2024-10-07 13:36:26.753356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.100 [2024-10-07 13:36:26.753385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.100 [2024-10-07 13:36:26.753433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.100 [2024-10-07 13:36:26.753453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.100 [2024-10-07 13:36:26.753467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.100 [2024-10-07 13:36:26.753484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.100 [2024-10-07 13:36:26.753498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.100 [2024-10-07 13:36:26.753511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.100 [2024-10-07 13:36:26.753712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.100 [2024-10-07 13:36:26.753736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.100 [2024-10-07 13:36:26.762892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.100 [2024-10-07 13:36:26.762927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.100 [2024-10-07 13:36:26.764691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.100 [2024-10-07 13:36:26.764724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.100 [2024-10-07 13:36:26.764741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.100 [2024-10-07 13:36:26.764864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.100 [2024-10-07 13:36:26.764890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.100 [2024-10-07 13:36:26.764906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.100 [2024-10-07 13:36:26.767057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.100 [2024-10-07 13:36:26.767094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.100 [2024-10-07 13:36:26.767969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.100 [2024-10-07 13:36:26.767994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.100 [2024-10-07 13:36:26.768008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.100 [2024-10-07 13:36:26.768024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.100 [2024-10-07 13:36:26.768038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.100 [2024-10-07 13:36:26.768050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.100 [2024-10-07 13:36:26.768316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.100 [2024-10-07 13:36:26.768340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.100 [2024-10-07 13:36:26.773012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.101 [2024-10-07 13:36:26.773042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.101 [2024-10-07 13:36:26.773249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.101 [2024-10-07 13:36:26.773276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.101 [2024-10-07 13:36:26.773294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.101 [2024-10-07 13:36:26.773375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.101 [2024-10-07 13:36:26.773401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.101 [2024-10-07 13:36:26.773418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.101 [2024-10-07 13:36:26.773443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.101 [2024-10-07 13:36:26.773465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.101 [2024-10-07 13:36:26.773486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.101 [2024-10-07 13:36:26.773501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.101 [2024-10-07 13:36:26.773514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.101 [2024-10-07 13:36:26.773547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.101 [2024-10-07 13:36:26.773561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.101 [2024-10-07 13:36:26.773574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.101 [2024-10-07 13:36:26.773597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.101 [2024-10-07 13:36:26.773628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.101 [2024-10-07 13:36:26.783124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.101 [2024-10-07 13:36:26.783173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.101 [2024-10-07 13:36:26.783333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.101 [2024-10-07 13:36:26.783362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.101 [2024-10-07 13:36:26.783390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.101 [2024-10-07 13:36:26.783721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.101 [2024-10-07 13:36:26.783753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.101 [2024-10-07 13:36:26.783770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.101 [2024-10-07 13:36:26.783790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.101 [2024-10-07 13:36:26.784036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.101 [2024-10-07 13:36:26.784065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.101 [2024-10-07 13:36:26.784079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.101 [2024-10-07 13:36:26.784107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.101 [2024-10-07 13:36:26.784177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.101 [2024-10-07 13:36:26.784199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.101 [2024-10-07 13:36:26.784213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.101 [2024-10-07 13:36:26.784226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.101 [2024-10-07 13:36:26.784250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.101 [2024-10-07 13:36:26.794617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.101 [2024-10-07 13:36:26.794650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.101 [2024-10-07 13:36:26.794858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.101 [2024-10-07 13:36:26.794888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.101 [2024-10-07 13:36:26.794905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.101 [2024-10-07 13:36:26.794990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.101 [2024-10-07 13:36:26.795017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.101 [2024-10-07 13:36:26.795033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.101 [2024-10-07 13:36:26.795155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.101 [2024-10-07 13:36:26.795181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.101 [2024-10-07 13:36:26.797356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.101 [2024-10-07 13:36:26.797385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.101 [2024-10-07 13:36:26.797399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.101 [2024-10-07 13:36:26.797417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.101 [2024-10-07 13:36:26.797432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.101 [2024-10-07 13:36:26.797445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.101 [2024-10-07 13:36:26.798271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.101 [2024-10-07 13:36:26.798296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.101 [2024-10-07 13:36:26.804739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.101 [2024-10-07 13:36:26.804786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.101 [2024-10-07 13:36:26.804936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.101 [2024-10-07 13:36:26.804964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.101 [2024-10-07 13:36:26.804981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.101 [2024-10-07 13:36:26.805068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.101 [2024-10-07 13:36:26.805094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.101 [2024-10-07 13:36:26.805110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.101 [2024-10-07 13:36:26.805129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.101 [2024-10-07 13:36:26.805155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.101 [2024-10-07 13:36:26.805174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.101 [2024-10-07 13:36:26.805187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.101 [2024-10-07 13:36:26.805199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.101 [2024-10-07 13:36:26.805225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.101 [2024-10-07 13:36:26.805243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.101 [2024-10-07 13:36:26.805256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.101 [2024-10-07 13:36:26.805268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.101 [2024-10-07 13:36:26.805291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.101 [2024-10-07 13:36:26.814882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.101 [2024-10-07 13:36:26.814932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.101 [2024-10-07 13:36:26.815063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.101 [2024-10-07 13:36:26.815092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.101 [2024-10-07 13:36:26.815109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.101 [2024-10-07 13:36:26.815469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.101 [2024-10-07 13:36:26.815500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.101 [2024-10-07 13:36:26.815517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.101 [2024-10-07 13:36:26.815537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.101 [2024-10-07 13:36:26.815589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.101 [2024-10-07 13:36:26.815617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.101 [2024-10-07 13:36:26.815631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.102 [2024-10-07 13:36:26.815644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.102 [2024-10-07 13:36:26.815836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.102 [2024-10-07 13:36:26.815861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.102 [2024-10-07 13:36:26.815875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.102 [2024-10-07 13:36:26.815904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.102 [2024-10-07 13:36:26.815970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.102 [2024-10-07 13:36:26.828625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.102 [2024-10-07 13:36:26.828681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.102 [2024-10-07 13:36:26.829071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.102 [2024-10-07 13:36:26.829102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.102 [2024-10-07 13:36:26.829120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.102 [2024-10-07 13:36:26.829257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.102 [2024-10-07 13:36:26.829282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.102 [2024-10-07 13:36:26.829298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.102 [2024-10-07 13:36:26.829507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.102 [2024-10-07 13:36:26.829535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.102 [2024-10-07 13:36:26.829740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.102 [2024-10-07 13:36:26.829764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.102 [2024-10-07 13:36:26.829778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.102 [2024-10-07 13:36:26.829796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.102 [2024-10-07 13:36:26.829810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.102 [2024-10-07 13:36:26.829823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.102 [2024-10-07 13:36:26.829869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.102 [2024-10-07 13:36:26.829890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.102 [2024-10-07 13:36:26.842605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.102 [2024-10-07 13:36:26.842653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.102 [2024-10-07 13:36:26.843283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.102 [2024-10-07 13:36:26.843315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.102 [2024-10-07 13:36:26.843333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.102 [2024-10-07 13:36:26.843453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.102 [2024-10-07 13:36:26.843479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.102 [2024-10-07 13:36:26.843495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.102 [2024-10-07 13:36:26.843727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.102 [2024-10-07 13:36:26.843756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.102 [2024-10-07 13:36:26.843819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.102 [2024-10-07 13:36:26.843840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.102 [2024-10-07 13:36:26.843854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.102 [2024-10-07 13:36:26.843871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.102 [2024-10-07 13:36:26.843886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.102 [2024-10-07 13:36:26.843899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.102 [2024-10-07 13:36:26.843924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.102 [2024-10-07 13:36:26.843942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.102 [2024-10-07 13:36:26.854907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.102 [2024-10-07 13:36:26.854942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.102 [2024-10-07 13:36:26.855156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.102 [2024-10-07 13:36:26.855185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.102 [2024-10-07 13:36:26.855202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.102 [2024-10-07 13:36:26.855308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.102 [2024-10-07 13:36:26.855334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.102 [2024-10-07 13:36:26.855350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.102 [2024-10-07 13:36:26.855376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.102 [2024-10-07 13:36:26.855398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.102 [2024-10-07 13:36:26.855419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.102 [2024-10-07 13:36:26.855435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.102 [2024-10-07 13:36:26.855449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.102 [2024-10-07 13:36:26.855466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.102 [2024-10-07 13:36:26.855480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.102 [2024-10-07 13:36:26.855493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.102 [2024-10-07 13:36:26.855518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.102 [2024-10-07 13:36:26.855541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.102 [2024-10-07 13:36:26.867297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.102 [2024-10-07 13:36:26.867331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.102 [2024-10-07 13:36:26.867561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.102 [2024-10-07 13:36:26.867590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.102 [2024-10-07 13:36:26.867607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.102 [2024-10-07 13:36:26.867726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.102 [2024-10-07 13:36:26.867753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.102 [2024-10-07 13:36:26.867769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.102 [2024-10-07 13:36:26.867878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.102 [2024-10-07 13:36:26.867904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.102 [2024-10-07 13:36:26.868021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.102 [2024-10-07 13:36:26.868042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.102 [2024-10-07 13:36:26.868070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.102 [2024-10-07 13:36:26.868087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.102 [2024-10-07 13:36:26.868101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.102 [2024-10-07 13:36:26.868113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.102 [2024-10-07 13:36:26.871558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.102 [2024-10-07 13:36:26.871586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.102 [2024-10-07 13:36:26.877410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.102 [2024-10-07 13:36:26.877455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.102 [2024-10-07 13:36:26.877634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.102 [2024-10-07 13:36:26.877662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.102 [2024-10-07 13:36:26.877688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.102 [2024-10-07 13:36:26.877811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.102 [2024-10-07 13:36:26.877837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.102 [2024-10-07 13:36:26.877854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.102 [2024-10-07 13:36:26.877872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.102 [2024-10-07 13:36:26.878162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.102 [2024-10-07 13:36:26.878204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.102 [2024-10-07 13:36:26.878223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.102 [2024-10-07 13:36:26.878236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.102 [2024-10-07 13:36:26.878383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.102 [2024-10-07 13:36:26.878406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.102 [2024-10-07 13:36:26.878421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.102 [2024-10-07 13:36:26.878436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.102 [2024-10-07 13:36:26.878546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.102 [2024-10-07 13:36:26.888024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.103 [2024-10-07 13:36:26.888058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.103 [2024-10-07 13:36:26.888221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.103 [2024-10-07 13:36:26.888250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.103 [2024-10-07 13:36:26.888267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.103 [2024-10-07 13:36:26.888349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.103 [2024-10-07 13:36:26.888376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.103 [2024-10-07 13:36:26.888391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.103 [2024-10-07 13:36:26.888689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.103 [2024-10-07 13:36:26.888733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.103 [2024-10-07 13:36:26.888941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.103 [2024-10-07 13:36:26.888967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.103 [2024-10-07 13:36:26.888982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.103 [2024-10-07 13:36:26.889000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.103 [2024-10-07 13:36:26.889015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.103 [2024-10-07 13:36:26.889028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.103 [2024-10-07 13:36:26.889094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.103 [2024-10-07 13:36:26.889114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.103 [2024-10-07 13:36:26.900049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.103 [2024-10-07 13:36:26.900083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.103 [2024-10-07 13:36:26.902984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.103 [2024-10-07 13:36:26.903017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.103 [2024-10-07 13:36:26.903034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.103 [2024-10-07 13:36:26.903117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.103 [2024-10-07 13:36:26.903143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.103 [2024-10-07 13:36:26.903160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.103 [2024-10-07 13:36:26.903662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.103 [2024-10-07 13:36:26.903717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.103 [2024-10-07 13:36:26.903958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.103 [2024-10-07 13:36:26.903983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.103 [2024-10-07 13:36:26.903998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.103 [2024-10-07 13:36:26.904017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.103 [2024-10-07 13:36:26.904031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.103 [2024-10-07 13:36:26.904045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.103 [2024-10-07 13:36:26.904297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.103 [2024-10-07 13:36:26.904322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.103 [2024-10-07 13:36:26.910161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.103 [2024-10-07 13:36:26.910205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.103 [2024-10-07 13:36:26.910384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.103 [2024-10-07 13:36:26.910411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.103 [2024-10-07 13:36:26.910428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.103 [2024-10-07 13:36:26.910632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.103 [2024-10-07 13:36:26.910661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.103 [2024-10-07 13:36:26.910687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.103 [2024-10-07 13:36:26.910707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.103 [2024-10-07 13:36:26.910830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.103 [2024-10-07 13:36:26.910854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.103 [2024-10-07 13:36:26.910869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.103 [2024-10-07 13:36:26.910882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.103 [2024-10-07 13:36:26.910989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.103 [2024-10-07 13:36:26.911011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.103 [2024-10-07 13:36:26.911024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.103 [2024-10-07 13:36:26.911038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.103 [2024-10-07 13:36:26.911148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.103 [2024-10-07 13:36:26.920243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.103 [2024-10-07 13:36:26.920424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.103 [2024-10-07 13:36:26.920453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.103 [2024-10-07 13:36:26.920470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.103 [2024-10-07 13:36:26.920508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.103 [2024-10-07 13:36:26.920540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.103 [2024-10-07 13:36:26.920569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.103 [2024-10-07 13:36:26.920586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.103 [2024-10-07 13:36:26.920599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.103 [2024-10-07 13:36:26.920623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.103 [2024-10-07 13:36:26.920747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.103 [2024-10-07 13:36:26.920774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.103 [2024-10-07 13:36:26.920790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.103 [2024-10-07 13:36:26.920816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.103 [2024-10-07 13:36:26.920840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.103 [2024-10-07 13:36:26.920856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.103 [2024-10-07 13:36:26.920869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.103 [2024-10-07 13:36:26.920893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.103 [2024-10-07 13:36:26.932280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.103 [2024-10-07 13:36:26.932314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.103 [2024-10-07 13:36:26.932482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.103 [2024-10-07 13:36:26.932511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.103 [2024-10-07 13:36:26.932529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.103 [2024-10-07 13:36:26.932614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.103 [2024-10-07 13:36:26.932640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.103 [2024-10-07 13:36:26.932655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.103 [2024-10-07 13:36:26.932688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.103 [2024-10-07 13:36:26.932712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.103 [2024-10-07 13:36:26.932733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.103 [2024-10-07 13:36:26.932748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.103 [2024-10-07 13:36:26.932767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.103 [2024-10-07 13:36:26.932785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.103 [2024-10-07 13:36:26.932800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.103 [2024-10-07 13:36:26.932813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.103 [2024-10-07 13:36:26.932837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.103 [2024-10-07 13:36:26.932854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.103 [2024-10-07 13:36:26.945812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.103 [2024-10-07 13:36:26.945846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.103 [2024-10-07 13:36:26.946050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.103 [2024-10-07 13:36:26.946079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.103 [2024-10-07 13:36:26.946097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.103 [2024-10-07 13:36:26.946199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.103 [2024-10-07 13:36:26.946226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.103 [2024-10-07 13:36:26.946242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.104 [2024-10-07 13:36:26.946354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.104 [2024-10-07 13:36:26.946388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.104 [2024-10-07 13:36:26.946521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.104 [2024-10-07 13:36:26.946542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.104 [2024-10-07 13:36:26.946555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.104 [2024-10-07 13:36:26.946572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.104 [2024-10-07 13:36:26.946585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.104 [2024-10-07 13:36:26.946598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.104 [2024-10-07 13:36:26.949998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.104 [2024-10-07 13:36:26.950027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.104 [2024-10-07 13:36:26.955930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.104 [2024-10-07 13:36:26.955990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.104 [2024-10-07 13:36:26.956190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-10-07 13:36:26.956217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.104 [2024-10-07 13:36:26.956234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.104 [2024-10-07 13:36:26.956322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-10-07 13:36:26.956348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.104 [2024-10-07 13:36:26.956370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.104 [2024-10-07 13:36:26.956390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.104 [2024-10-07 13:36:26.956416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.104 [2024-10-07 13:36:26.956434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.104 [2024-10-07 13:36:26.956447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.104 [2024-10-07 13:36:26.956460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.104 [2024-10-07 13:36:26.956500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.104 [2024-10-07 13:36:26.956517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.104 [2024-10-07 13:36:26.956529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.104 [2024-10-07 13:36:26.956541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.104 [2024-10-07 13:36:26.956577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.104 [2024-10-07 13:36:26.966031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.104 [2024-10-07 13:36:26.966202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-10-07 13:36:26.966231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.104 [2024-10-07 13:36:26.966248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.104 [2024-10-07 13:36:26.966446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.104 [2024-10-07 13:36:26.966523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.104 [2024-10-07 13:36:26.966574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.104 [2024-10-07 13:36:26.966590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.104 [2024-10-07 13:36:26.966605] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.104 [2024-10-07 13:36:26.966630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.104 [2024-10-07 13:36:26.966766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-10-07 13:36:26.966793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.104 [2024-10-07 13:36:26.966810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.104 [2024-10-07 13:36:26.967267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.104 [2024-10-07 13:36:26.967521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.104 [2024-10-07 13:36:26.967547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.104 [2024-10-07 13:36:26.967562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.104 [2024-10-07 13:36:26.967614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.104 [2024-10-07 13:36:26.979244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.104 [2024-10-07 13:36:26.979283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.104 [2024-10-07 13:36:26.979909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-10-07 13:36:26.979941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.104 [2024-10-07 13:36:26.979959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.104 [2024-10-07 13:36:26.980065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-10-07 13:36:26.980091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.104 [2024-10-07 13:36:26.980107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.104 [2024-10-07 13:36:26.980326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.104 [2024-10-07 13:36:26.980355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.104 [2024-10-07 13:36:26.980579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.104 [2024-10-07 13:36:26.980603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.104 [2024-10-07 13:36:26.980617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.104 [2024-10-07 13:36:26.980635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.104 [2024-10-07 13:36:26.980649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.104 [2024-10-07 13:36:26.980663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.104 [2024-10-07 13:36:26.980911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.104 [2024-10-07 13:36:26.980935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.104 [2024-10-07 13:36:26.990752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.104 [2024-10-07 13:36:26.990785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.104 [2024-10-07 13:36:26.991087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-10-07 13:36:26.991118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.104 [2024-10-07 13:36:26.991135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.104 [2024-10-07 13:36:26.991270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-10-07 13:36:26.991295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.104 [2024-10-07 13:36:26.991312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.104 [2024-10-07 13:36:26.991432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.104 [2024-10-07 13:36:26.991460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.104 [2024-10-07 13:36:26.991561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.104 [2024-10-07 13:36:26.991583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.104 [2024-10-07 13:36:26.991602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.104 [2024-10-07 13:36:26.991635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.104 [2024-10-07 13:36:26.991649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.104 [2024-10-07 13:36:26.991662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.104 [2024-10-07 13:36:26.992873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.104 [2024-10-07 13:36:26.992898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.104 [2024-10-07 13:36:27.000882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.104 [2024-10-07 13:36:27.000930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.104 [2024-10-07 13:36:27.001149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-10-07 13:36:27.001177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.104 [2024-10-07 13:36:27.001194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.104 [2024-10-07 13:36:27.001313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.104 [2024-10-07 13:36:27.001339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.104 [2024-10-07 13:36:27.001355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.104 [2024-10-07 13:36:27.001374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.104 [2024-10-07 13:36:27.001400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.104 [2024-10-07 13:36:27.001420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.104 [2024-10-07 13:36:27.001435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.104 [2024-10-07 13:36:27.001448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.104 [2024-10-07 13:36:27.001474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.104 [2024-10-07 13:36:27.001491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.105 [2024-10-07 13:36:27.001505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.105 [2024-10-07 13:36:27.001534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.105 [2024-10-07 13:36:27.001556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.105 [2024-10-07 13:36:27.010984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.105 [2024-10-07 13:36:27.011151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.105 [2024-10-07 13:36:27.011181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.105 [2024-10-07 13:36:27.011198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.105 [2024-10-07 13:36:27.011458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.105 [2024-10-07 13:36:27.011538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.105 [2024-10-07 13:36:27.011586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.105 [2024-10-07 13:36:27.011608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.105 [2024-10-07 13:36:27.011624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.105 [2024-10-07 13:36:27.011649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.105 [2024-10-07 13:36:27.011770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.105 [2024-10-07 13:36:27.011797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.105 [2024-10-07 13:36:27.011814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.105 [2024-10-07 13:36:27.011998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.105 [2024-10-07 13:36:27.012072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.105 [2024-10-07 13:36:27.012093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.105 [2024-10-07 13:36:27.012122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.105 [2024-10-07 13:36:27.012149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.105 [2024-10-07 13:36:27.024679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.105 [2024-10-07 13:36:27.024713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.105 [2024-10-07 13:36:27.025322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.105 [2024-10-07 13:36:27.025354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.105 [2024-10-07 13:36:27.025372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.105 [2024-10-07 13:36:27.025478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.105 [2024-10-07 13:36:27.025504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.105 [2024-10-07 13:36:27.025520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.105 [2024-10-07 13:36:27.025751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.105 [2024-10-07 13:36:27.025780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.105 [2024-10-07 13:36:27.025990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.105 [2024-10-07 13:36:27.026014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.105 [2024-10-07 13:36:27.026028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.105 [2024-10-07 13:36:27.026046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.105 [2024-10-07 13:36:27.026061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.105 [2024-10-07 13:36:27.026074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.105 [2024-10-07 13:36:27.026306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.105 [2024-10-07 13:36:27.026329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.105 [2024-10-07 13:36:27.035912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.105 [2024-10-07 13:36:27.035951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.105 [2024-10-07 13:36:27.036213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.105 [2024-10-07 13:36:27.036244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.105 [2024-10-07 13:36:27.036261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.105 [2024-10-07 13:36:27.036372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.105 [2024-10-07 13:36:27.036398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.105 [2024-10-07 13:36:27.036414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.105 [2024-10-07 13:36:27.038617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.105 [2024-10-07 13:36:27.038650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.105 [2024-10-07 13:36:27.039044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.105 [2024-10-07 13:36:27.039086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.105 [2024-10-07 13:36:27.039100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.105 [2024-10-07 13:36:27.039118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.105 [2024-10-07 13:36:27.039133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.105 [2024-10-07 13:36:27.039146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.105 [2024-10-07 13:36:27.039826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.105 [2024-10-07 13:36:27.039851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.105 [2024-10-07 13:36:27.047782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.105 [2024-10-07 13:36:27.047816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.105 [2024-10-07 13:36:27.048207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.105 [2024-10-07 13:36:27.048239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.105 [2024-10-07 13:36:27.048256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.105 [2024-10-07 13:36:27.048361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.105 [2024-10-07 13:36:27.048387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.105 [2024-10-07 13:36:27.048402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.105 [2024-10-07 13:36:27.048869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.105 [2024-10-07 13:36:27.048901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.105 [2024-10-07 13:36:27.049038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.105 [2024-10-07 13:36:27.049061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.105 [2024-10-07 13:36:27.049075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.105 [2024-10-07 13:36:27.049098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.105 [2024-10-07 13:36:27.049115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.105 [2024-10-07 13:36:27.049128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.105 [2024-10-07 13:36:27.049235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.105 [2024-10-07 13:36:27.049273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.105 [2024-10-07 13:36:27.057896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.105 [2024-10-07 13:36:27.057943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.105 [2024-10-07 13:36:27.058144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.105 [2024-10-07 13:36:27.058172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.105 [2024-10-07 13:36:27.058189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.105 [2024-10-07 13:36:27.058312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.105 [2024-10-07 13:36:27.058338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.105 [2024-10-07 13:36:27.058354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.105 [2024-10-07 13:36:27.058373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.105 [2024-10-07 13:36:27.058400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.105 [2024-10-07 13:36:27.058418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.105 [2024-10-07 13:36:27.058431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.105 [2024-10-07 13:36:27.058445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.105 [2024-10-07 13:36:27.058471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.105 [2024-10-07 13:36:27.058488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.105 [2024-10-07 13:36:27.058501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.105 [2024-10-07 13:36:27.058514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.105 [2024-10-07 13:36:27.058537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.105 [2024-10-07 13:36:27.068212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.105 [2024-10-07 13:36:27.068244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.105 [2024-10-07 13:36:27.068360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.105 [2024-10-07 13:36:27.068389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.105 [2024-10-07 13:36:27.068406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.106 [2024-10-07 13:36:27.068517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.106 [2024-10-07 13:36:27.068543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.106 [2024-10-07 13:36:27.068559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.106 [2024-10-07 13:36:27.069075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.106 [2024-10-07 13:36:27.069105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.106 [2024-10-07 13:36:27.069390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.106 [2024-10-07 13:36:27.069416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.106 [2024-10-07 13:36:27.069430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.106 [2024-10-07 13:36:27.069448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.106 [2024-10-07 13:36:27.069463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.106 [2024-10-07 13:36:27.069476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.106 [2024-10-07 13:36:27.069720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.106 [2024-10-07 13:36:27.069745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.106 [2024-10-07 13:36:27.078401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.106 [2024-10-07 13:36:27.080713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.106 [2024-10-07 13:36:27.080827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.106 [2024-10-07 13:36:27.080855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.106 [2024-10-07 13:36:27.080872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.106 [2024-10-07 13:36:27.081893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.106 [2024-10-07 13:36:27.081923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.106 [2024-10-07 13:36:27.081940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.106 [2024-10-07 13:36:27.081958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.106 [2024-10-07 13:36:27.082215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.106 [2024-10-07 13:36:27.082241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.106 [2024-10-07 13:36:27.082256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.106 [2024-10-07 13:36:27.082285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.106 [2024-10-07 13:36:27.082581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.106 [2024-10-07 13:36:27.082606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.106 [2024-10-07 13:36:27.082620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.106 [2024-10-07 13:36:27.082634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.106 [2024-10-07 13:36:27.082744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.106 [2024-10-07 13:36:27.088485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.106 [2024-10-07 13:36:27.088718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.106 [2024-10-07 13:36:27.088747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.106 [2024-10-07 13:36:27.088770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.106 [2024-10-07 13:36:27.088797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.106 [2024-10-07 13:36:27.088838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.106 [2024-10-07 13:36:27.088858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.106 [2024-10-07 13:36:27.088872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.106 [2024-10-07 13:36:27.088897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.106 [2024-10-07 13:36:27.091749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.106 [2024-10-07 13:36:27.091878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.106 [2024-10-07 13:36:27.091906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.106 [2024-10-07 13:36:27.091923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.106 [2024-10-07 13:36:27.091949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.106 [2024-10-07 13:36:27.092366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.106 [2024-10-07 13:36:27.092407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.106 [2024-10-07 13:36:27.092422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.106 [2024-10-07 13:36:27.092909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.106 [2024-10-07 13:36:27.098690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.106 [2024-10-07 13:36:27.098868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.106 [2024-10-07 13:36:27.098897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.106 [2024-10-07 13:36:27.098914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.106 [2024-10-07 13:36:27.099099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.106 [2024-10-07 13:36:27.099156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.106 [2024-10-07 13:36:27.099177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.106 [2024-10-07 13:36:27.099191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.106 [2024-10-07 13:36:27.099217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.106 [2024-10-07 13:36:27.101850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.106 [2024-10-07 13:36:27.101987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.106 [2024-10-07 13:36:27.102014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.106 [2024-10-07 13:36:27.102031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.106 [2024-10-07 13:36:27.102251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.106 [2024-10-07 13:36:27.102495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.106 [2024-10-07 13:36:27.102520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.106 [2024-10-07 13:36:27.102535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.106 [2024-10-07 13:36:27.102655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.106 [2024-10-07 13:36:27.110592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.106 [2024-10-07 13:36:27.110777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.106 [2024-10-07 13:36:27.110807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.106 [2024-10-07 13:36:27.110824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.106 [2024-10-07 13:36:27.111334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.106 [2024-10-07 13:36:27.111599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.106 [2024-10-07 13:36:27.111624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.106 [2024-10-07 13:36:27.111639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.106 [2024-10-07 13:36:27.111700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.106 [2024-10-07 13:36:27.112019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.106 [2024-10-07 13:36:27.112176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.106 [2024-10-07 13:36:27.112206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.106 [2024-10-07 13:36:27.112224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.106 [2024-10-07 13:36:27.112250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.106 [2024-10-07 13:36:27.112274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.106 [2024-10-07 13:36:27.112289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.106 [2024-10-07 13:36:27.112302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.106 [2024-10-07 13:36:27.112586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.106 [2024-10-07 13:36:27.120761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.107 [2024-10-07 13:36:27.120884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.107 [2024-10-07 13:36:27.120913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.107 [2024-10-07 13:36:27.120930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.107 [2024-10-07 13:36:27.122018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.107 [2024-10-07 13:36:27.122238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.107 [2024-10-07 13:36:27.122262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.107 [2024-10-07 13:36:27.122276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.107 [2024-10-07 13:36:27.122394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.107 [2024-10-07 13:36:27.122513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.107 [2024-10-07 13:36:27.122767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.107 [2024-10-07 13:36:27.122798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.107 [2024-10-07 13:36:27.122815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.107 [2024-10-07 13:36:27.123622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.107 [2024-10-07 13:36:27.125505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.107 [2024-10-07 13:36:27.125531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.107 [2024-10-07 13:36:27.125546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.107 [2024-10-07 13:36:27.126132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.107 [2024-10-07 13:36:27.130845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.107 [2024-10-07 13:36:27.131030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.107 [2024-10-07 13:36:27.131059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.107 [2024-10-07 13:36:27.131075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.107 [2024-10-07 13:36:27.131101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.107 [2024-10-07 13:36:27.131126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.107 [2024-10-07 13:36:27.131141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.107 [2024-10-07 13:36:27.131155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.107 [2024-10-07 13:36:27.131179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.107 [2024-10-07 13:36:27.132606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.107 [2024-10-07 13:36:27.132808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.107 [2024-10-07 13:36:27.132836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.107 [2024-10-07 13:36:27.132853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.107 [2024-10-07 13:36:27.132879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.107 [2024-10-07 13:36:27.132903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.107 [2024-10-07 13:36:27.132917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.107 [2024-10-07 13:36:27.132931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.107 [2024-10-07 13:36:27.132955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.107 [2024-10-07 13:36:27.141313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.107 [2024-10-07 13:36:27.141458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.107 [2024-10-07 13:36:27.141486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.107 [2024-10-07 13:36:27.141508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.107 [2024-10-07 13:36:27.141535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.107 [2024-10-07 13:36:27.141740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.107 [2024-10-07 13:36:27.141763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.107 [2024-10-07 13:36:27.141778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.107 [2024-10-07 13:36:27.141831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.107 [2024-10-07 13:36:27.142852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.107 [2024-10-07 13:36:27.142961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.107 [2024-10-07 13:36:27.142989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.107 [2024-10-07 13:36:27.143005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.107 [2024-10-07 13:36:27.143030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.107 [2024-10-07 13:36:27.143228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.107 [2024-10-07 13:36:27.143251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.107 [2024-10-07 13:36:27.143265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.107 [2024-10-07 13:36:27.143331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.107 [2024-10-07 13:36:27.154889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.107 [2024-10-07 13:36:27.155162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.107 [2024-10-07 13:36:27.155369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.107 [2024-10-07 13:36:27.155399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.107 [2024-10-07 13:36:27.155417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.107 [2024-10-07 13:36:27.156032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.107 [2024-10-07 13:36:27.156063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.107 [2024-10-07 13:36:27.156081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.107 [2024-10-07 13:36:27.156101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.107 [2024-10-07 13:36:27.156398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.107 [2024-10-07 13:36:27.156441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.107 [2024-10-07 13:36:27.156455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.107 [2024-10-07 13:36:27.156470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.107 [2024-10-07 13:36:27.156715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.107 [2024-10-07 13:36:27.156741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.107 [2024-10-07 13:36:27.156762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.107 [2024-10-07 13:36:27.156777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.107 [2024-10-07 13:36:27.156846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.107 [2024-10-07 13:36:27.165627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.107 [2024-10-07 13:36:27.165660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.107 [2024-10-07 13:36:27.165981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.107 [2024-10-07 13:36:27.166012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.107 [2024-10-07 13:36:27.166030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.107 [2024-10-07 13:36:27.166144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.107 [2024-10-07 13:36:27.166171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.107 [2024-10-07 13:36:27.166187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.107 [2024-10-07 13:36:27.166294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.107 [2024-10-07 13:36:27.166322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.107 [2024-10-07 13:36:27.166453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.107 [2024-10-07 13:36:27.166474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.107 [2024-10-07 13:36:27.166487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.107 [2024-10-07 13:36:27.166518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.107 [2024-10-07 13:36:27.166533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.107 [2024-10-07 13:36:27.166545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.107 [2024-10-07 13:36:27.166677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.107 [2024-10-07 13:36:27.166699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.107 [2024-10-07 13:36:27.175892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.107 [2024-10-07 13:36:27.175924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.107 [2024-10-07 13:36:27.176120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.107 [2024-10-07 13:36:27.176150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.107 [2024-10-07 13:36:27.176167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.107 [2024-10-07 13:36:27.176281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.107 [2024-10-07 13:36:27.176309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.107 [2024-10-07 13:36:27.176325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.107 [2024-10-07 13:36:27.176350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.108 [2024-10-07 13:36:27.176371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.108 [2024-10-07 13:36:27.176398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.108 [2024-10-07 13:36:27.176415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.108 [2024-10-07 13:36:27.176427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.108 [2024-10-07 13:36:27.176445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.108 [2024-10-07 13:36:27.176459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.108 [2024-10-07 13:36:27.176472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.108 [2024-10-07 13:36:27.176496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.108 [2024-10-07 13:36:27.176513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.108 [2024-10-07 13:36:27.186033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.108 [2024-10-07 13:36:27.186066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.108 [2024-10-07 13:36:27.186232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-10-07 13:36:27.186262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.108 [2024-10-07 13:36:27.186279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.108 [2024-10-07 13:36:27.186386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-10-07 13:36:27.186413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.108 [2024-10-07 13:36:27.186430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.108 [2024-10-07 13:36:27.186455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.108 [2024-10-07 13:36:27.186476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.108 [2024-10-07 13:36:27.186497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.108 [2024-10-07 13:36:27.186512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.108 [2024-10-07 13:36:27.186525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.108 [2024-10-07 13:36:27.186542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.108 [2024-10-07 13:36:27.186556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.108 [2024-10-07 13:36:27.186569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.108 [2024-10-07 13:36:27.186593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.108 [2024-10-07 13:36:27.186610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.108 [2024-10-07 13:36:27.196418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.108 [2024-10-07 13:36:27.196451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.108 [2024-10-07 13:36:27.196682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-10-07 13:36:27.196717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.108 [2024-10-07 13:36:27.196749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.108 [2024-10-07 13:36:27.196845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-10-07 13:36:27.196873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.108 [2024-10-07 13:36:27.196889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.108 [2024-10-07 13:36:27.197724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.108 [2024-10-07 13:36:27.197753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.108 [2024-10-07 13:36:27.199469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.108 [2024-10-07 13:36:27.199496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.108 [2024-10-07 13:36:27.199510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.108 [2024-10-07 13:36:27.199527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.108 [2024-10-07 13:36:27.199541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.108 [2024-10-07 13:36:27.199554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.108 [2024-10-07 13:36:27.200146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.108 [2024-10-07 13:36:27.200172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.108 [2024-10-07 13:36:27.206595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.108 [2024-10-07 13:36:27.206625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.108 [2024-10-07 13:36:27.206821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-10-07 13:36:27.206850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.108 [2024-10-07 13:36:27.206868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.108 [2024-10-07 13:36:27.206980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-10-07 13:36:27.207007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.108 [2024-10-07 13:36:27.207023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.108 [2024-10-07 13:36:27.207048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.108 [2024-10-07 13:36:27.207069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.108 [2024-10-07 13:36:27.207090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.108 [2024-10-07 13:36:27.207105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.108 [2024-10-07 13:36:27.207118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.108 [2024-10-07 13:36:27.207135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.108 [2024-10-07 13:36:27.207150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.108 [2024-10-07 13:36:27.207162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.108 [2024-10-07 13:36:27.207192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.108 [2024-10-07 13:36:27.207225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.108 [2024-10-07 13:36:27.216721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.108 [2024-10-07 13:36:27.216770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.108 [2024-10-07 13:36:27.216930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-10-07 13:36:27.216960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.108 [2024-10-07 13:36:27.216977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.108 [2024-10-07 13:36:27.217084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-10-07 13:36:27.217112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.108 [2024-10-07 13:36:27.217128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.108 [2024-10-07 13:36:27.217147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.108 [2024-10-07 13:36:27.217173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.108 [2024-10-07 13:36:27.217191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.108 [2024-10-07 13:36:27.217204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.108 [2024-10-07 13:36:27.217217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.108 [2024-10-07 13:36:27.217242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.108 [2024-10-07 13:36:27.217258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.108 [2024-10-07 13:36:27.217271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.108 [2024-10-07 13:36:27.217284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.108 [2024-10-07 13:36:27.217322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.108 [2024-10-07 13:36:27.231004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.108 [2024-10-07 13:36:27.231036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.108 [2024-10-07 13:36:27.231424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-10-07 13:36:27.231456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.108 [2024-10-07 13:36:27.231473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.108 [2024-10-07 13:36:27.231584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.108 [2024-10-07 13:36:27.231610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.108 [2024-10-07 13:36:27.231625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.108 [2024-10-07 13:36:27.231877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.108 [2024-10-07 13:36:27.231907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.108 [2024-10-07 13:36:27.232161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.108 [2024-10-07 13:36:27.232187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.108 [2024-10-07 13:36:27.232201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.108 [2024-10-07 13:36:27.232218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.108 [2024-10-07 13:36:27.232233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.108 [2024-10-07 13:36:27.232246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.108 [2024-10-07 13:36:27.232450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.109 [2024-10-07 13:36:27.232475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.109 [2024-10-07 13:36:27.245500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.109 [2024-10-07 13:36:27.246502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.109 [2024-10-07 13:36:27.246679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-10-07 13:36:27.246710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.109 [2024-10-07 13:36:27.246727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.109 [2024-10-07 13:36:27.246904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-10-07 13:36:27.246934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.109 [2024-10-07 13:36:27.246951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.109 [2024-10-07 13:36:27.246970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.109 [2024-10-07 13:36:27.247007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.109 [2024-10-07 13:36:27.247028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.109 [2024-10-07 13:36:27.247042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.109 [2024-10-07 13:36:27.247055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.109 [2024-10-07 13:36:27.247080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.109 [2024-10-07 13:36:27.247098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.109 [2024-10-07 13:36:27.247111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.109 [2024-10-07 13:36:27.247124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.109 [2024-10-07 13:36:27.247146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.109 [2024-10-07 13:36:27.260638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.109 [2024-10-07 13:36:27.260696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.109 [2024-10-07 13:36:27.261310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-10-07 13:36:27.261341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.109 [2024-10-07 13:36:27.261358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.109 [2024-10-07 13:36:27.261474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-10-07 13:36:27.261499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.109 [2024-10-07 13:36:27.261515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.109 [2024-10-07 13:36:27.261811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.109 [2024-10-07 13:36:27.261856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.109 [2024-10-07 13:36:27.262076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.109 [2024-10-07 13:36:27.262100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.109 [2024-10-07 13:36:27.262115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.109 [2024-10-07 13:36:27.262132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.109 [2024-10-07 13:36:27.262147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.109 [2024-10-07 13:36:27.262159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.109 [2024-10-07 13:36:27.262225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.109 [2024-10-07 13:36:27.262245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.109 [2024-10-07 13:36:27.276811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.109 [2024-10-07 13:36:27.276847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.109 [2024-10-07 13:36:27.277568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-10-07 13:36:27.277599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.109 [2024-10-07 13:36:27.277616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.109 [2024-10-07 13:36:27.277726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-10-07 13:36:27.277753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.109 [2024-10-07 13:36:27.277769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.109 [2024-10-07 13:36:27.277997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.109 [2024-10-07 13:36:27.278028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.109 [2024-10-07 13:36:27.278522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.109 [2024-10-07 13:36:27.278546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.109 [2024-10-07 13:36:27.278559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.109 [2024-10-07 13:36:27.278576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.109 [2024-10-07 13:36:27.278590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.109 [2024-10-07 13:36:27.278602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.109 [2024-10-07 13:36:27.278856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.109 [2024-10-07 13:36:27.278887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.109 [2024-10-07 13:36:27.289102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.109 [2024-10-07 13:36:27.289136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.109 [2024-10-07 13:36:27.289331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-10-07 13:36:27.289362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.109 [2024-10-07 13:36:27.289380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.109 [2024-10-07 13:36:27.289492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-10-07 13:36:27.289521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.109 [2024-10-07 13:36:27.289538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.109 [2024-10-07 13:36:27.291378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.109 [2024-10-07 13:36:27.291410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.109 [2024-10-07 13:36:27.292238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.109 [2024-10-07 13:36:27.292262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.109 [2024-10-07 13:36:27.292276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.109 [2024-10-07 13:36:27.292292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.109 [2024-10-07 13:36:27.292306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.109 [2024-10-07 13:36:27.292319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.109 [2024-10-07 13:36:27.292766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.109 [2024-10-07 13:36:27.292808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.109 [2024-10-07 13:36:27.299215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.109 [2024-10-07 13:36:27.299277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.109 [2024-10-07 13:36:27.299458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-10-07 13:36:27.299487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.109 [2024-10-07 13:36:27.299504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.109 [2024-10-07 13:36:27.299737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-10-07 13:36:27.299767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.109 [2024-10-07 13:36:27.299784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.109 [2024-10-07 13:36:27.299803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.109 [2024-10-07 13:36:27.299925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.109 [2024-10-07 13:36:27.299950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.109 [2024-10-07 13:36:27.299970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.109 [2024-10-07 13:36:27.299984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.109 [2024-10-07 13:36:27.300097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.109 [2024-10-07 13:36:27.300121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.109 [2024-10-07 13:36:27.300134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.109 [2024-10-07 13:36:27.300148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.109 [2024-10-07 13:36:27.300255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.109 [2024-10-07 13:36:27.309315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.109 [2024-10-07 13:36:27.309459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.109 [2024-10-07 13:36:27.309489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.109 [2024-10-07 13:36:27.309507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.109 [2024-10-07 13:36:27.309716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.109 [2024-10-07 13:36:27.309810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.110 [2024-10-07 13:36:27.309845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.110 [2024-10-07 13:36:27.309863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.110 [2024-10-07 13:36:27.309876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.110 [2024-10-07 13:36:27.309916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.110 [2024-10-07 13:36:27.310037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.110 [2024-10-07 13:36:27.310065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.110 [2024-10-07 13:36:27.310082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.110 [2024-10-07 13:36:27.310285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.110 [2024-10-07 13:36:27.310356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.110 [2024-10-07 13:36:27.310393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.110 [2024-10-07 13:36:27.310407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.110 [2024-10-07 13:36:27.310433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.110 [2024-10-07 13:36:27.324192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.110 [2024-10-07 13:36:27.324224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.110 [2024-10-07 13:36:27.324541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.110 [2024-10-07 13:36:27.324572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.110 [2024-10-07 13:36:27.324605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.110 [2024-10-07 13:36:27.324764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.110 [2024-10-07 13:36:27.324798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.110 [2024-10-07 13:36:27.324816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.110 [2024-10-07 13:36:27.325020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.110 [2024-10-07 13:36:27.325050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.110 [2024-10-07 13:36:27.325250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.110 [2024-10-07 13:36:27.325274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.110 [2024-10-07 13:36:27.325289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.110 [2024-10-07 13:36:27.325306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.110 [2024-10-07 13:36:27.325321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.110 [2024-10-07 13:36:27.325333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.110 [2024-10-07 13:36:27.325562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.110 [2024-10-07 13:36:27.325587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.110 [2024-10-07 13:36:27.339634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.110 [2024-10-07 13:36:27.339677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.110 [2024-10-07 13:36:27.339835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.110 [2024-10-07 13:36:27.339865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.110 [2024-10-07 13:36:27.339882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.110 [2024-10-07 13:36:27.339960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.110 [2024-10-07 13:36:27.339990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.110 [2024-10-07 13:36:27.340006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.110 [2024-10-07 13:36:27.340607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.110 [2024-10-07 13:36:27.340637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.110 [2024-10-07 13:36:27.340896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.110 [2024-10-07 13:36:27.340921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.110 [2024-10-07 13:36:27.340935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.110 [2024-10-07 13:36:27.340952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.110 [2024-10-07 13:36:27.340967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.110 [2024-10-07 13:36:27.340980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.110 [2024-10-07 13:36:27.341212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.110 [2024-10-07 13:36:27.341237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.110 [2024-10-07 13:36:27.354456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.110 [2024-10-07 13:36:27.354505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.110 [2024-10-07 13:36:27.354891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.110 [2024-10-07 13:36:27.354923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.110 [2024-10-07 13:36:27.354940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.110 [2024-10-07 13:36:27.355052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.110 [2024-10-07 13:36:27.355078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.110 [2024-10-07 13:36:27.355094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.110 [2024-10-07 13:36:27.355299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.110 [2024-10-07 13:36:27.355329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.110 [2024-10-07 13:36:27.355529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.110 [2024-10-07 13:36:27.355553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.110 [2024-10-07 13:36:27.355567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.110 [2024-10-07 13:36:27.355584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.110 [2024-10-07 13:36:27.355599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.110 [2024-10-07 13:36:27.355612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.110 [2024-10-07 13:36:27.355662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.110 [2024-10-07 13:36:27.355692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.110 [2024-10-07 13:36:27.369383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.110 [2024-10-07 13:36:27.369416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.110 [2024-10-07 13:36:27.369637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.110 [2024-10-07 13:36:27.369683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.110 [2024-10-07 13:36:27.369703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.110 [2024-10-07 13:36:27.369794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.110 [2024-10-07 13:36:27.369821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.110 [2024-10-07 13:36:27.369838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.110 [2024-10-07 13:36:27.369864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.110 [2024-10-07 13:36:27.369886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.110 [2024-10-07 13:36:27.369907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.110 [2024-10-07 13:36:27.369923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.110 [2024-10-07 13:36:27.369945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.110 [2024-10-07 13:36:27.369975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.110 [2024-10-07 13:36:27.369990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.110 [2024-10-07 13:36:27.370003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.110 [2024-10-07 13:36:27.370028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.110 [2024-10-07 13:36:27.370044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.110 [2024-10-07 13:36:27.384510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.110 [2024-10-07 13:36:27.384545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.110 [2024-10-07 13:36:27.385423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.111 [2024-10-07 13:36:27.385454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.111 [2024-10-07 13:36:27.385472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.111 [2024-10-07 13:36:27.385581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.111 [2024-10-07 13:36:27.385607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.111 [2024-10-07 13:36:27.385623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.111 [2024-10-07 13:36:27.386023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.111 [2024-10-07 13:36:27.386052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.111 [2024-10-07 13:36:27.386125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.111 [2024-10-07 13:36:27.386144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.111 [2024-10-07 13:36:27.386173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.111 [2024-10-07 13:36:27.386191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.111 [2024-10-07 13:36:27.386206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.111 [2024-10-07 13:36:27.386219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.111 [2024-10-07 13:36:27.386463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.111 [2024-10-07 13:36:27.386488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.111 [2024-10-07 13:36:27.401340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.111 [2024-10-07 13:36:27.401374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.111 [2024-10-07 13:36:27.401897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.111 [2024-10-07 13:36:27.401929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.111 [2024-10-07 13:36:27.401946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.111 [2024-10-07 13:36:27.402057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.111 [2024-10-07 13:36:27.402084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.111 [2024-10-07 13:36:27.402105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.111 [2024-10-07 13:36:27.402324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.111 [2024-10-07 13:36:27.402354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.111 [2024-10-07 13:36:27.402402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.111 [2024-10-07 13:36:27.402423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.111 [2024-10-07 13:36:27.402437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.111 [2024-10-07 13:36:27.402455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.111 [2024-10-07 13:36:27.402469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.111 [2024-10-07 13:36:27.402481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.111 [2024-10-07 13:36:27.402699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.111 [2024-10-07 13:36:27.402724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.111 [2024-10-07 13:36:27.413364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.111 [2024-10-07 13:36:27.413398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.111 [2024-10-07 13:36:27.415580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.111 [2024-10-07 13:36:27.415612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.111 [2024-10-07 13:36:27.415635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.111 [2024-10-07 13:36:27.415767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.111 [2024-10-07 13:36:27.415793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.111 [2024-10-07 13:36:27.415810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.111 [2024-10-07 13:36:27.416490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.111 [2024-10-07 13:36:27.416520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.111 [2024-10-07 13:36:27.416993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.111 [2024-10-07 13:36:27.417018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.111 [2024-10-07 13:36:27.417046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.111 [2024-10-07 13:36:27.417066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.111 [2024-10-07 13:36:27.417080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.111 [2024-10-07 13:36:27.417091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.111 [2024-10-07 13:36:27.417169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.111 [2024-10-07 13:36:27.417189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.111 [2024-10-07 13:36:27.425788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.111 [2024-10-07 13:36:27.425962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.111 [2024-10-07 13:36:27.426105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.111 [2024-10-07 13:36:27.426135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.111 [2024-10-07 13:36:27.426152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.111 [2024-10-07 13:36:27.426378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.111 [2024-10-07 13:36:27.426407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.111 [2024-10-07 13:36:27.426423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.111 [2024-10-07 13:36:27.426442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.111 [2024-10-07 13:36:27.426552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.111 [2024-10-07 13:36:27.426577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.111 [2024-10-07 13:36:27.426590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.111 [2024-10-07 13:36:27.426603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.111 [2024-10-07 13:36:27.429601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.111 [2024-10-07 13:36:27.429628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.111 [2024-10-07 13:36:27.429642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.111 [2024-10-07 13:36:27.429683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.111 [2024-10-07 13:36:27.431396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.111 [2024-10-07 13:36:27.435880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.111 [2024-10-07 13:36:27.436056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.111 [2024-10-07 13:36:27.436085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.111 [2024-10-07 13:36:27.436102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.111 [2024-10-07 13:36:27.436127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.111 [2024-10-07 13:36:27.436164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.111 [2024-10-07 13:36:27.436183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.111 [2024-10-07 13:36:27.436197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.111 [2024-10-07 13:36:27.436224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.111 [2024-10-07 13:36:27.436244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.111 [2024-10-07 13:36:27.436399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.111 [2024-10-07 13:36:27.436427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.111 [2024-10-07 13:36:27.436443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.111 [2024-10-07 13:36:27.436759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.111 [2024-10-07 13:36:27.436929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.111 [2024-10-07 13:36:27.436965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.111 [2024-10-07 13:36:27.436980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.111 [2024-10-07 13:36:27.437090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.111 [2024-10-07 13:36:27.446832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.111 [2024-10-07 13:36:27.446866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.111 [2024-10-07 13:36:27.447072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.111 [2024-10-07 13:36:27.447107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.111 [2024-10-07 13:36:27.447135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.111 [2024-10-07 13:36:27.447236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.111 [2024-10-07 13:36:27.447264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.111 [2024-10-07 13:36:27.447280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.111 [2024-10-07 13:36:27.447480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.111 [2024-10-07 13:36:27.447509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.111 [2024-10-07 13:36:27.447758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.112 [2024-10-07 13:36:27.447783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.112 [2024-10-07 13:36:27.447798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.112 [2024-10-07 13:36:27.447816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.112 [2024-10-07 13:36:27.447831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.112 [2024-10-07 13:36:27.447844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.112 [2024-10-07 13:36:27.447894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.112 [2024-10-07 13:36:27.447915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.112 [2024-10-07 13:36:27.461427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.112 [2024-10-07 13:36:27.461461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.112 [2024-10-07 13:36:27.461766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-10-07 13:36:27.461797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.112 [2024-10-07 13:36:27.461814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.112 [2024-10-07 13:36:27.461928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-10-07 13:36:27.461955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.112 [2024-10-07 13:36:27.461982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.112 [2024-10-07 13:36:27.462192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.112 [2024-10-07 13:36:27.462221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.112 [2024-10-07 13:36:27.462422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.112 [2024-10-07 13:36:27.462445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.112 [2024-10-07 13:36:27.462460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.112 [2024-10-07 13:36:27.462478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.112 [2024-10-07 13:36:27.462492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.112 [2024-10-07 13:36:27.462505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.112 [2024-10-07 13:36:27.462568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.112 [2024-10-07 13:36:27.462588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.112 [2024-10-07 13:36:27.477069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.112 [2024-10-07 13:36:27.477102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.112 [2024-10-07 13:36:27.477215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-10-07 13:36:27.477255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.112 [2024-10-07 13:36:27.477273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.112 [2024-10-07 13:36:27.477381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-10-07 13:36:27.477407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.112 [2024-10-07 13:36:27.477423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.112 [2024-10-07 13:36:27.477448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.112 [2024-10-07 13:36:27.477470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.112 [2024-10-07 13:36:27.477491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.112 [2024-10-07 13:36:27.477506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.112 [2024-10-07 13:36:27.477520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.112 [2024-10-07 13:36:27.477537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.112 [2024-10-07 13:36:27.477551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.112 [2024-10-07 13:36:27.477564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.112 [2024-10-07 13:36:27.477589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.112 [2024-10-07 13:36:27.477620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.112 [2024-10-07 13:36:27.494052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.112 [2024-10-07 13:36:27.494084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.112 [2024-10-07 13:36:27.494310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-10-07 13:36:27.494340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.112 [2024-10-07 13:36:27.494357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.112 [2024-10-07 13:36:27.494466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-10-07 13:36:27.494493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.112 [2024-10-07 13:36:27.494509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.112 [2024-10-07 13:36:27.494904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.112 [2024-10-07 13:36:27.494933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.112 [2024-10-07 13:36:27.495171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.112 [2024-10-07 13:36:27.495196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.112 [2024-10-07 13:36:27.495211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.112 [2024-10-07 13:36:27.495228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.112 [2024-10-07 13:36:27.495243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.112 [2024-10-07 13:36:27.495255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.112 [2024-10-07 13:36:27.495748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.112 [2024-10-07 13:36:27.495773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.112 [2024-10-07 13:36:27.509786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.112 [2024-10-07 13:36:27.509819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.112 [2024-10-07 13:36:27.509953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-10-07 13:36:27.509988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.112 [2024-10-07 13:36:27.510005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.112 [2024-10-07 13:36:27.510086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-10-07 13:36:27.510113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.112 [2024-10-07 13:36:27.510129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.112 [2024-10-07 13:36:27.510600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.112 [2024-10-07 13:36:27.510628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.112 [2024-10-07 13:36:27.510915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.112 [2024-10-07 13:36:27.510940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.112 [2024-10-07 13:36:27.510955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.112 [2024-10-07 13:36:27.510972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.112 [2024-10-07 13:36:27.511019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.112 [2024-10-07 13:36:27.511033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.112 [2024-10-07 13:36:27.511270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.112 [2024-10-07 13:36:27.511295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.112 [2024-10-07 13:36:27.525108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.112 [2024-10-07 13:36:27.525156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.112 [2024-10-07 13:36:27.525294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-10-07 13:36:27.525324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.112 [2024-10-07 13:36:27.525341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.112 [2024-10-07 13:36:27.525452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.112 [2024-10-07 13:36:27.525479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.112 [2024-10-07 13:36:27.525495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.112 [2024-10-07 13:36:27.525978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.112 [2024-10-07 13:36:27.526007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.112 [2024-10-07 13:36:27.526285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.112 [2024-10-07 13:36:27.526311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.112 [2024-10-07 13:36:27.526325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.112 [2024-10-07 13:36:27.526342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.112 [2024-10-07 13:36:27.526374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.112 [2024-10-07 13:36:27.526387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.112 [2024-10-07 13:36:27.526622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.112 [2024-10-07 13:36:27.526647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.112 [2024-10-07 13:36:27.537134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.113 [2024-10-07 13:36:27.537167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.113 [2024-10-07 13:36:27.537402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-10-07 13:36:27.537432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.113 [2024-10-07 13:36:27.537450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.113 [2024-10-07 13:36:27.537562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-10-07 13:36:27.537589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.113 [2024-10-07 13:36:27.537605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.113 [2024-10-07 13:36:27.537728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.113 [2024-10-07 13:36:27.537769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.113 [2024-10-07 13:36:27.539916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.113 [2024-10-07 13:36:27.539942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.113 [2024-10-07 13:36:27.539964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.113 [2024-10-07 13:36:27.539981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.113 [2024-10-07 13:36:27.539996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.113 [2024-10-07 13:36:27.540008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.113 [2024-10-07 13:36:27.540865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.113 [2024-10-07 13:36:27.540891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.113 [2024-10-07 13:36:27.547252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.113 [2024-10-07 13:36:27.547297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.113 [2024-10-07 13:36:27.547478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-10-07 13:36:27.547507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.113 [2024-10-07 13:36:27.547525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.113 [2024-10-07 13:36:27.547639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-10-07 13:36:27.547675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.113 [2024-10-07 13:36:27.547694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.113 [2024-10-07 13:36:27.547713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.113 [2024-10-07 13:36:27.547739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.113 [2024-10-07 13:36:27.547758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.113 [2024-10-07 13:36:27.547771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.113 [2024-10-07 13:36:27.547784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.113 [2024-10-07 13:36:27.547818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.113 [2024-10-07 13:36:27.547837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.113 [2024-10-07 13:36:27.547851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.113 [2024-10-07 13:36:27.547863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.113 [2024-10-07 13:36:27.547886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.113 [2024-10-07 13:36:27.557335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.113 [2024-10-07 13:36:27.557675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-10-07 13:36:27.557707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.113 [2024-10-07 13:36:27.557730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.113 [2024-10-07 13:36:27.557796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.113 [2024-10-07 13:36:27.557832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.113 [2024-10-07 13:36:27.557862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.113 [2024-10-07 13:36:27.557878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.113 [2024-10-07 13:36:27.557891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.113 [2024-10-07 13:36:27.558088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.113 [2024-10-07 13:36:27.558266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-10-07 13:36:27.558293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.113 [2024-10-07 13:36:27.558310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.113 [2024-10-07 13:36:27.558360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.113 [2024-10-07 13:36:27.558389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.113 [2024-10-07 13:36:27.558405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.113 [2024-10-07 13:36:27.558418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.113 [2024-10-07 13:36:27.558443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.113 [2024-10-07 13:36:27.571898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.113 [2024-10-07 13:36:27.571931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.113 [2024-10-07 13:36:27.572050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-10-07 13:36:27.572081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.113 [2024-10-07 13:36:27.572099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.113 [2024-10-07 13:36:27.572183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-10-07 13:36:27.572209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.113 [2024-10-07 13:36:27.572225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.113 [2024-10-07 13:36:27.572250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.113 [2024-10-07 13:36:27.572271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.113 [2024-10-07 13:36:27.572293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.113 [2024-10-07 13:36:27.572308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.113 [2024-10-07 13:36:27.572321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.113 [2024-10-07 13:36:27.572338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.113 [2024-10-07 13:36:27.572352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.113 [2024-10-07 13:36:27.572371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.113 [2024-10-07 13:36:27.572397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.113 [2024-10-07 13:36:27.572429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.113 [2024-10-07 13:36:27.587164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.113 [2024-10-07 13:36:27.587197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.113 [2024-10-07 13:36:27.587978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-10-07 13:36:27.588010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.113 [2024-10-07 13:36:27.588027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.113 [2024-10-07 13:36:27.588140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.113 [2024-10-07 13:36:27.588166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.113 [2024-10-07 13:36:27.588183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.113 [2024-10-07 13:36:27.588270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.113 [2024-10-07 13:36:27.588297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.113 [2024-10-07 13:36:27.588319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.113 [2024-10-07 13:36:27.588335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.113 [2024-10-07 13:36:27.588349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.113 [2024-10-07 13:36:27.588366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.113 [2024-10-07 13:36:27.588380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.113 [2024-10-07 13:36:27.588393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.113 [2024-10-07 13:36:27.588417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.113 [2024-10-07 13:36:27.588433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.113 [2024-10-07 13:36:27.601498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.114 [2024-10-07 13:36:27.601533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.114 [2024-10-07 13:36:27.601985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.114 [2024-10-07 13:36:27.602017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.114 [2024-10-07 13:36:27.602034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.114 [2024-10-07 13:36:27.602133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.114 [2024-10-07 13:36:27.602164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.114 [2024-10-07 13:36:27.602180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.114 [2024-10-07 13:36:27.602385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.114 [2024-10-07 13:36:27.602421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.114 [2024-10-07 13:36:27.602903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.114 [2024-10-07 13:36:27.602938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.114 [2024-10-07 13:36:27.602952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.114 [2024-10-07 13:36:27.602983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.114 [2024-10-07 13:36:27.603004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.114 [2024-10-07 13:36:27.603016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.114 [2024-10-07 13:36:27.603263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.114 [2024-10-07 13:36:27.603296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.114 [2024-10-07 13:36:27.613279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.114 [2024-10-07 13:36:27.613312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.114 [2024-10-07 13:36:27.613550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.114 [2024-10-07 13:36:27.613581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.114 [2024-10-07 13:36:27.613598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.114 [2024-10-07 13:36:27.613708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.114 [2024-10-07 13:36:27.613736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.114 [2024-10-07 13:36:27.613753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.114 8398.17 IOPS, 32.81 MiB/s [2024-10-07T11:36:37.826Z] [2024-10-07 13:36:27.615456] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.114 [2024-10-07 13:36:27.615483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.114 [2024-10-07 13:36:27.615596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.114 [2024-10-07 13:36:27.615618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.114 [2024-10-07 13:36:27.615633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.114 [2024-10-07 13:36:27.615651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.114 [2024-10-07 13:36:27.615671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.114 [2024-10-07 13:36:27.615686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.114 [2024-10-07 13:36:27.618688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.114 [2024-10-07 13:36:27.618717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.114 [2024-10-07 13:36:27.623753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.114 [2024-10-07 13:36:27.623785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.114 [2024-10-07 13:36:27.623927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.114 [2024-10-07 13:36:27.623966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.114 [2024-10-07 13:36:27.623989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.114 [2024-10-07 13:36:27.624106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.114 [2024-10-07 13:36:27.624133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.114 [2024-10-07 13:36:27.624149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.114 [2024-10-07 13:36:27.624174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.114 [2024-10-07 13:36:27.624196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.114 [2024-10-07 13:36:27.624217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.114 [2024-10-07 13:36:27.624232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.114 [2024-10-07 13:36:27.624245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.114 [2024-10-07 13:36:27.624262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.114 [2024-10-07 13:36:27.624276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.114 [2024-10-07 13:36:27.624289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.114 [2024-10-07 13:36:27.624314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.114 [2024-10-07 13:36:27.624330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.114 [2024-10-07 13:36:27.634039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.114 [2024-10-07 13:36:27.634071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.114 [2024-10-07 13:36:27.634218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.114 [2024-10-07 13:36:27.634248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.114 [2024-10-07 13:36:27.634265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.114 [2024-10-07 13:36:27.634374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.114 [2024-10-07 13:36:27.634401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.114 [2024-10-07 13:36:27.634417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.114 [2024-10-07 13:36:27.634602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.114 [2024-10-07 13:36:27.634647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.114 [2024-10-07 13:36:27.634882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.114 [2024-10-07 13:36:27.634907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.114 [2024-10-07 13:36:27.634922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.114 [2024-10-07 13:36:27.634940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.114 [2024-10-07 13:36:27.634955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.114 [2024-10-07 13:36:27.634973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.114 [2024-10-07 13:36:27.635040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.114 [2024-10-07 13:36:27.635061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.114 [2024-10-07 13:36:27.648371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.114 [2024-10-07 13:36:27.648404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.114 [2024-10-07 13:36:27.648904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.114 [2024-10-07 13:36:27.648935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.114 [2024-10-07 13:36:27.648952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.114 [2024-10-07 13:36:27.649053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.114 [2024-10-07 13:36:27.649080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.114 [2024-10-07 13:36:27.649096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.114 [2024-10-07 13:36:27.649313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.114 [2024-10-07 13:36:27.649343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.114 [2024-10-07 13:36:27.649543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.114 [2024-10-07 13:36:27.649567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.115 [2024-10-07 13:36:27.649581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.115 [2024-10-07 13:36:27.649598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.115 [2024-10-07 13:36:27.649613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.115 [2024-10-07 13:36:27.649626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.115 [2024-10-07 13:36:27.649702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.115 [2024-10-07 13:36:27.649738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.115 [2024-10-07 13:36:27.663971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.115 [2024-10-07 13:36:27.664003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.115 [2024-10-07 13:36:27.664152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.115 [2024-10-07 13:36:27.664182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.115 [2024-10-07 13:36:27.664199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.115 [2024-10-07 13:36:27.664304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.115 [2024-10-07 13:36:27.664331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.115 [2024-10-07 13:36:27.664347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.115 [2024-10-07 13:36:27.664373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.115 [2024-10-07 13:36:27.664395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.115 [2024-10-07 13:36:27.664422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.115 [2024-10-07 13:36:27.664438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.115 [2024-10-07 13:36:27.664451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.115 [2024-10-07 13:36:27.664468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.115 [2024-10-07 13:36:27.664482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.115 [2024-10-07 13:36:27.664495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.115 [2024-10-07 13:36:27.664519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.115 [2024-10-07 13:36:27.664535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.115 [2024-10-07 13:36:27.676727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.115 [2024-10-07 13:36:27.676761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.115 [2024-10-07 13:36:27.676977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.115 [2024-10-07 13:36:27.677007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.115 [2024-10-07 13:36:27.677024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.115 [2024-10-07 13:36:27.677112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.115 [2024-10-07 13:36:27.677141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.115 [2024-10-07 13:36:27.677157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.115 [2024-10-07 13:36:27.677280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.115 [2024-10-07 13:36:27.677307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.115 [2024-10-07 13:36:27.677419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.115 [2024-10-07 13:36:27.677440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.115 [2024-10-07 13:36:27.677453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.115 [2024-10-07 13:36:27.677469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.115 [2024-10-07 13:36:27.677483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.115 [2024-10-07 13:36:27.677495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.115 [2024-10-07 13:36:27.680692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.115 [2024-10-07 13:36:27.680719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.115 [2024-10-07 13:36:27.687071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.115 [2024-10-07 13:36:27.687116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.115 [2024-10-07 13:36:27.687334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.115 [2024-10-07 13:36:27.687363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.115 [2024-10-07 13:36:27.687385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.115 [2024-10-07 13:36:27.687491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.115 [2024-10-07 13:36:27.687518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.115 [2024-10-07 13:36:27.687534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.115 [2024-10-07 13:36:27.687560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.115 [2024-10-07 13:36:27.687581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.115 [2024-10-07 13:36:27.687602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.115 [2024-10-07 13:36:27.687618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.115 [2024-10-07 13:36:27.687631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.115 [2024-10-07 13:36:27.687648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.115 [2024-10-07 13:36:27.687663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.115 [2024-10-07 13:36:27.687687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.115 [2024-10-07 13:36:27.687713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.115 [2024-10-07 13:36:27.687730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.115 [2024-10-07 13:36:27.698277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.115 [2024-10-07 13:36:27.698309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.115 [2024-10-07 13:36:27.698964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.115 [2024-10-07 13:36:27.699006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.115 [2024-10-07 13:36:27.699023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.115 [2024-10-07 13:36:27.699126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.115 [2024-10-07 13:36:27.699153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.115 [2024-10-07 13:36:27.699170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.115 [2024-10-07 13:36:27.699410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.115 [2024-10-07 13:36:27.699440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.115 [2024-10-07 13:36:27.699555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.115 [2024-10-07 13:36:27.699579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.115 [2024-10-07 13:36:27.699593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.115 [2024-10-07 13:36:27.699611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.115 [2024-10-07 13:36:27.699625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.115 [2024-10-07 13:36:27.699637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.115 [2024-10-07 13:36:27.699700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.115 [2024-10-07 13:36:27.699722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.115 [2024-10-07 13:36:27.710598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.115 [2024-10-07 13:36:27.710631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.115 [2024-10-07 13:36:27.710749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.115 [2024-10-07 13:36:27.710779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.115 [2024-10-07 13:36:27.710796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.115 [2024-10-07 13:36:27.710908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.115 [2024-10-07 13:36:27.710935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.115 [2024-10-07 13:36:27.710950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.115 [2024-10-07 13:36:27.711217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.115 [2024-10-07 13:36:27.711245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.115 [2024-10-07 13:36:27.711478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.115 [2024-10-07 13:36:27.711502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.115 [2024-10-07 13:36:27.711517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.115 [2024-10-07 13:36:27.711534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.115 [2024-10-07 13:36:27.711548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.115 [2024-10-07 13:36:27.711561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.115 [2024-10-07 13:36:27.711612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.115 [2024-10-07 13:36:27.711633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.115 [2024-10-07 13:36:27.721712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.115 [2024-10-07 13:36:27.721746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.116 [2024-10-07 13:36:27.721963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.116 [2024-10-07 13:36:27.721993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.116 [2024-10-07 13:36:27.722010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.116 [2024-10-07 13:36:27.722085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.116 [2024-10-07 13:36:27.722111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.116 [2024-10-07 13:36:27.722127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.116 [2024-10-07 13:36:27.722237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.116 [2024-10-07 13:36:27.722264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.116 [2024-10-07 13:36:27.723431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.116 [2024-10-07 13:36:27.723460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.116 [2024-10-07 13:36:27.723482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.116 [2024-10-07 13:36:27.723499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.116 [2024-10-07 13:36:27.723512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.116 [2024-10-07 13:36:27.723524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.116 [2024-10-07 13:36:27.724824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.116 [2024-10-07 13:36:27.724850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.116 [2024-10-07 13:36:27.731827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.116 [2024-10-07 13:36:27.731873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.116 [2024-10-07 13:36:27.732038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.116 [2024-10-07 13:36:27.732067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.116 [2024-10-07 13:36:27.732084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.116 [2024-10-07 13:36:27.732502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.116 [2024-10-07 13:36:27.732532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.116 [2024-10-07 13:36:27.732548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.116 [2024-10-07 13:36:27.732567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.116 [2024-10-07 13:36:27.732711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.116 [2024-10-07 13:36:27.732736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.116 [2024-10-07 13:36:27.732749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.116 [2024-10-07 13:36:27.732762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.116 [2024-10-07 13:36:27.732881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.116 [2024-10-07 13:36:27.732905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.116 [2024-10-07 13:36:27.732919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.116 [2024-10-07 13:36:27.732938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.116 [2024-10-07 13:36:27.733066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.116 [2024-10-07 13:36:27.742101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.116 [2024-10-07 13:36:27.742133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.116 [2024-10-07 13:36:27.742247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.116 [2024-10-07 13:36:27.742278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.116 [2024-10-07 13:36:27.742295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.116 [2024-10-07 13:36:27.742459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.116 [2024-10-07 13:36:27.742486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.116 [2024-10-07 13:36:27.742503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.116 [2024-10-07 13:36:27.742709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.116 [2024-10-07 13:36:27.742752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.116 [2024-10-07 13:36:27.743334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.116 [2024-10-07 13:36:27.743358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.116 [2024-10-07 13:36:27.743377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.116 [2024-10-07 13:36:27.743393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.116 [2024-10-07 13:36:27.743407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.116 [2024-10-07 13:36:27.743419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.116 [2024-10-07 13:36:27.743681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.116 [2024-10-07 13:36:27.743706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.116 [2024-10-07 13:36:27.752637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.116 [2024-10-07 13:36:27.752677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.116 [2024-10-07 13:36:27.752819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.116 [2024-10-07 13:36:27.752847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.116 [2024-10-07 13:36:27.752865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.116 [2024-10-07 13:36:27.752974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.116 [2024-10-07 13:36:27.753000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.116 [2024-10-07 13:36:27.753016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.116 [2024-10-07 13:36:27.754964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.116 [2024-10-07 13:36:27.754996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.116 [2024-10-07 13:36:27.755747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.116 [2024-10-07 13:36:27.755772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.116 [2024-10-07 13:36:27.755786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.116 [2024-10-07 13:36:27.755804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.116 [2024-10-07 13:36:27.755818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.116 [2024-10-07 13:36:27.755830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.116 [2024-10-07 13:36:27.756361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.116 [2024-10-07 13:36:27.756390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.116 [2024-10-07 13:36:27.762761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.116 [2024-10-07 13:36:27.762808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.116 [2024-10-07 13:36:27.762972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.116 [2024-10-07 13:36:27.763002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.116 [2024-10-07 13:36:27.763018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.116 [2024-10-07 13:36:27.765674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.116 [2024-10-07 13:36:27.765706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.116 [2024-10-07 13:36:27.765724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.116 [2024-10-07 13:36:27.765743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.116 [2024-10-07 13:36:27.765893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.116 [2024-10-07 13:36:27.765920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.116 [2024-10-07 13:36:27.765934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.116 [2024-10-07 13:36:27.765948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.116 [2024-10-07 13:36:27.766107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.116 [2024-10-07 13:36:27.766132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.116 [2024-10-07 13:36:27.766146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.116 [2024-10-07 13:36:27.766160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.116 [2024-10-07 13:36:27.766265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.116 [2024-10-07 13:36:27.773040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.116 [2024-10-07 13:36:27.773072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.116 [2024-10-07 13:36:27.773209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.116 [2024-10-07 13:36:27.773239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.116 [2024-10-07 13:36:27.773256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.116 [2024-10-07 13:36:27.773371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.116 [2024-10-07 13:36:27.773397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.117 [2024-10-07 13:36:27.773413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.117 [2024-10-07 13:36:27.773438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.117 [2024-10-07 13:36:27.773459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.117 [2024-10-07 13:36:27.773481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.117 [2024-10-07 13:36:27.773501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.117 [2024-10-07 13:36:27.773516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.117 [2024-10-07 13:36:27.773533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.117 [2024-10-07 13:36:27.773547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.117 [2024-10-07 13:36:27.773560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.117 [2024-10-07 13:36:27.773584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.117 [2024-10-07 13:36:27.773601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.117 [2024-10-07 13:36:27.783848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.117 [2024-10-07 13:36:27.783882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.117 [2024-10-07 13:36:27.783988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.117 [2024-10-07 13:36:27.784019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.117 [2024-10-07 13:36:27.784036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.117 [2024-10-07 13:36:27.784111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.117 [2024-10-07 13:36:27.784138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.117 [2024-10-07 13:36:27.784154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.117 [2024-10-07 13:36:27.784179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.117 [2024-10-07 13:36:27.784201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.117 [2024-10-07 13:36:27.784221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.117 [2024-10-07 13:36:27.784236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.117 [2024-10-07 13:36:27.784249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.117 [2024-10-07 13:36:27.784266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.117 [2024-10-07 13:36:27.784281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.117 [2024-10-07 13:36:27.784294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.117 [2024-10-07 13:36:27.784318] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.117 [2024-10-07 13:36:27.784335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.117 [2024-10-07 13:36:27.795760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.117 [2024-10-07 13:36:27.795793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.117 [2024-10-07 13:36:27.796056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.117 [2024-10-07 13:36:27.796086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.117 [2024-10-07 13:36:27.796104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.117 [2024-10-07 13:36:27.796213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.117 [2024-10-07 13:36:27.796246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.117 [2024-10-07 13:36:27.796263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.117 [2024-10-07 13:36:27.798330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.117 [2024-10-07 13:36:27.798363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.117 [2024-10-07 13:36:27.799169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.117 [2024-10-07 13:36:27.799192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.117 [2024-10-07 13:36:27.799213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.117 [2024-10-07 13:36:27.799229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.117 [2024-10-07 13:36:27.799243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.117 [2024-10-07 13:36:27.799255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.117 [2024-10-07 13:36:27.799596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.117 [2024-10-07 13:36:27.799636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.117 [2024-10-07 13:36:27.806068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.117 [2024-10-07 13:36:27.806098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.117 [2024-10-07 13:36:27.806301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.117 [2024-10-07 13:36:27.806347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.117 [2024-10-07 13:36:27.806365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.117 [2024-10-07 13:36:27.806454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.117 [2024-10-07 13:36:27.806481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.117 [2024-10-07 13:36:27.806498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.117 [2024-10-07 13:36:27.806605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.117 [2024-10-07 13:36:27.806632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.117 [2024-10-07 13:36:27.809239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.117 [2024-10-07 13:36:27.809265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.117 [2024-10-07 13:36:27.809286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.117 [2024-10-07 13:36:27.809304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.117 [2024-10-07 13:36:27.809319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.117 [2024-10-07 13:36:27.809331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.117 [2024-10-07 13:36:27.809840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.117 [2024-10-07 13:36:27.809866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.117 [2024-10-07 13:36:27.816316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.117 [2024-10-07 13:36:27.816348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.117 [2024-10-07 13:36:27.816482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.117 [2024-10-07 13:36:27.816512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.117 [2024-10-07 13:36:27.816529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.117 [2024-10-07 13:36:27.816612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.117 [2024-10-07 13:36:27.816639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.117 [2024-10-07 13:36:27.816655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.117 [2024-10-07 13:36:27.816689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.117 [2024-10-07 13:36:27.816712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.117 [2024-10-07 13:36:27.816733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.117 [2024-10-07 13:36:27.816748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.117 [2024-10-07 13:36:27.816761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.117 [2024-10-07 13:36:27.816778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.117 [2024-10-07 13:36:27.816792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.117 [2024-10-07 13:36:27.816805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.117 [2024-10-07 13:36:27.816829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.117 [2024-10-07 13:36:27.816846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.117 [2024-10-07 13:36:27.829231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.117 [2024-10-07 13:36:27.829264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.117 [2024-10-07 13:36:27.829412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.117 [2024-10-07 13:36:27.829442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.117 [2024-10-07 13:36:27.829459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.117 [2024-10-07 13:36:27.829571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.117 [2024-10-07 13:36:27.829598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.117 [2024-10-07 13:36:27.829614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.117 [2024-10-07 13:36:27.830104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.117 [2024-10-07 13:36:27.830133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.117 [2024-10-07 13:36:27.830370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.117 [2024-10-07 13:36:27.830394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.117 [2024-10-07 13:36:27.830414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.117 [2024-10-07 13:36:27.830432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.117 [2024-10-07 13:36:27.830447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.118 [2024-10-07 13:36:27.830460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.118 [2024-10-07 13:36:27.830780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.118 [2024-10-07 13:36:27.830805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.118 [2024-10-07 13:36:27.845022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.118 [2024-10-07 13:36:27.845071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.118 [2024-10-07 13:36:27.845236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.118 [2024-10-07 13:36:27.845266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.118 [2024-10-07 13:36:27.845283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.118 [2024-10-07 13:36:27.845370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.118 [2024-10-07 13:36:27.845397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.118 [2024-10-07 13:36:27.845413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.118 [2024-10-07 13:36:27.845439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.118 [2024-10-07 13:36:27.845461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.118 [2024-10-07 13:36:27.845482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.118 [2024-10-07 13:36:27.845497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.118 [2024-10-07 13:36:27.845510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.118 [2024-10-07 13:36:27.845527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.118 [2024-10-07 13:36:27.845542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.118 [2024-10-07 13:36:27.845555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.118 [2024-10-07 13:36:27.845579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.118 [2024-10-07 13:36:27.845610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.118 [2024-10-07 13:36:27.856894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.118 [2024-10-07 13:36:27.856927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.118 [2024-10-07 13:36:27.857149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.118 [2024-10-07 13:36:27.857179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.118 [2024-10-07 13:36:27.857197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.118 [2024-10-07 13:36:27.857273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.118 [2024-10-07 13:36:27.857300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.118 [2024-10-07 13:36:27.857322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.118 [2024-10-07 13:36:27.857431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.118 [2024-10-07 13:36:27.857459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.118 [2024-10-07 13:36:27.860246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.118 [2024-10-07 13:36:27.860282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.118 [2024-10-07 13:36:27.860296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.118 [2024-10-07 13:36:27.860313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.118 [2024-10-07 13:36:27.860327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.118 [2024-10-07 13:36:27.860343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.118 [2024-10-07 13:36:27.861298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.118 [2024-10-07 13:36:27.861323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.118 [2024-10-07 13:36:27.867006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.118 [2024-10-07 13:36:27.867050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.118 [2024-10-07 13:36:27.867235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.118 [2024-10-07 13:36:27.867263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.118 [2024-10-07 13:36:27.867280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.118 [2024-10-07 13:36:27.867372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.118 [2024-10-07 13:36:27.867398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.118 [2024-10-07 13:36:27.867414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.118 [2024-10-07 13:36:27.867432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.118 [2024-10-07 13:36:27.867589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.118 [2024-10-07 13:36:27.867616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.118 [2024-10-07 13:36:27.867630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.118 [2024-10-07 13:36:27.867643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.118 [2024-10-07 13:36:27.867809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.118 [2024-10-07 13:36:27.867833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.118 [2024-10-07 13:36:27.867847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.118 [2024-10-07 13:36:27.867861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.118 [2024-10-07 13:36:27.867979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.118 [2024-10-07 13:36:27.877170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.118 [2024-10-07 13:36:27.877208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.118 [2024-10-07 13:36:27.877352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.118 [2024-10-07 13:36:27.877380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.118 [2024-10-07 13:36:27.877396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.118 [2024-10-07 13:36:27.877503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.118 [2024-10-07 13:36:27.877529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.118 [2024-10-07 13:36:27.877545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.118 [2024-10-07 13:36:27.877741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.118 [2024-10-07 13:36:27.877769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.118 [2024-10-07 13:36:27.877817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.118 [2024-10-07 13:36:27.877837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.118 [2024-10-07 13:36:27.877851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.118 [2024-10-07 13:36:27.877869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.118 [2024-10-07 13:36:27.877883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.118 [2024-10-07 13:36:27.877896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.118 [2024-10-07 13:36:27.878078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.118 [2024-10-07 13:36:27.878101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.118 [2024-10-07 13:36:27.890792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.118 [2024-10-07 13:36:27.890824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.118 [2024-10-07 13:36:27.890956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.118 [2024-10-07 13:36:27.890985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.118 [2024-10-07 13:36:27.891001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.118 [2024-10-07 13:36:27.891085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.118 [2024-10-07 13:36:27.891111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.118 [2024-10-07 13:36:27.891127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.118 [2024-10-07 13:36:27.891152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.118 [2024-10-07 13:36:27.891174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.118 [2024-10-07 13:36:27.891194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.118 [2024-10-07 13:36:27.891210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.118 [2024-10-07 13:36:27.891224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.118 [2024-10-07 13:36:27.891246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.118 [2024-10-07 13:36:27.891276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.118 [2024-10-07 13:36:27.891290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.118 [2024-10-07 13:36:27.891316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.118 [2024-10-07 13:36:27.891349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.118 [2024-10-07 13:36:27.900904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.118 [2024-10-07 13:36:27.900969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.118 [2024-10-07 13:36:27.901153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.118 [2024-10-07 13:36:27.901181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.119 [2024-10-07 13:36:27.901198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.119 [2024-10-07 13:36:27.901314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.119 [2024-10-07 13:36:27.901340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.119 [2024-10-07 13:36:27.901356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.119 [2024-10-07 13:36:27.901375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.119 [2024-10-07 13:36:27.901401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.119 [2024-10-07 13:36:27.901419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.119 [2024-10-07 13:36:27.901432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.119 [2024-10-07 13:36:27.901446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.119 [2024-10-07 13:36:27.901471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.119 [2024-10-07 13:36:27.901488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.119 [2024-10-07 13:36:27.901500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.119 [2024-10-07 13:36:27.901514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.119 [2024-10-07 13:36:27.904044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.119 [2024-10-07 13:36:27.910993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.119 [2024-10-07 13:36:27.911171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.119 [2024-10-07 13:36:27.911201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.119 [2024-10-07 13:36:27.911218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.119 [2024-10-07 13:36:27.911257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.119 [2024-10-07 13:36:27.911290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.119 [2024-10-07 13:36:27.911319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.119 [2024-10-07 13:36:27.911336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.119 [2024-10-07 13:36:27.911355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.119 [2024-10-07 13:36:27.911379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.119 [2024-10-07 13:36:27.911482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.119 [2024-10-07 13:36:27.911508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.119 [2024-10-07 13:36:27.911524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.119 [2024-10-07 13:36:27.911548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.119 [2024-10-07 13:36:27.911572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.119 [2024-10-07 13:36:27.911587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.119 [2024-10-07 13:36:27.911599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.119 [2024-10-07 13:36:27.911623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.119 [2024-10-07 13:36:27.924261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.119 [2024-10-07 13:36:27.924295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.119 [2024-10-07 13:36:27.924506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.119 [2024-10-07 13:36:27.924535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.119 [2024-10-07 13:36:27.924552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.119 [2024-10-07 13:36:27.924630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.119 [2024-10-07 13:36:27.924656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.119 [2024-10-07 13:36:27.924682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.119 [2024-10-07 13:36:27.924709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.119 [2024-10-07 13:36:27.924732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.119 [2024-10-07 13:36:27.924975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.119 [2024-10-07 13:36:27.924998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.119 [2024-10-07 13:36:27.925012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.119 [2024-10-07 13:36:27.925029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.119 [2024-10-07 13:36:27.925060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.119 [2024-10-07 13:36:27.925074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.119 [2024-10-07 13:36:27.925141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.119 [2024-10-07 13:36:27.925162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.119 [2024-10-07 13:36:27.938215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.119 [2024-10-07 13:36:27.938250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.119 [2024-10-07 13:36:27.939537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.119 [2024-10-07 13:36:27.939570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.119 [2024-10-07 13:36:27.939587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.119 [2024-10-07 13:36:27.939692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.119 [2024-10-07 13:36:27.939719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.119 [2024-10-07 13:36:27.939735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.119 [2024-10-07 13:36:27.940311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.119 [2024-10-07 13:36:27.940340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.119 [2024-10-07 13:36:27.940598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.119 [2024-10-07 13:36:27.940624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.119 [2024-10-07 13:36:27.940640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.119 [2024-10-07 13:36:27.940658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.119 [2024-10-07 13:36:27.940682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.119 [2024-10-07 13:36:27.940696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.119 [2024-10-07 13:36:27.940749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.119 [2024-10-07 13:36:27.940770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.119 [2024-10-07 13:36:27.948789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.119 [2024-10-07 13:36:27.948822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.119 [2024-10-07 13:36:27.949045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.119 [2024-10-07 13:36:27.949075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.119 [2024-10-07 13:36:27.949092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.119 [2024-10-07 13:36:27.949201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.119 [2024-10-07 13:36:27.949227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.119 [2024-10-07 13:36:27.949244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.119 [2024-10-07 13:36:27.949352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.119 [2024-10-07 13:36:27.949379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.119 [2024-10-07 13:36:27.949509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.119 [2024-10-07 13:36:27.949529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.119 [2024-10-07 13:36:27.949542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.119 [2024-10-07 13:36:27.949558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.119 [2024-10-07 13:36:27.949578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.119 [2024-10-07 13:36:27.949592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.119 [2024-10-07 13:36:27.953530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.119 [2024-10-07 13:36:27.953558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.120 [2024-10-07 13:36:27.959168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.120 [2024-10-07 13:36:27.959200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.120 [2024-10-07 13:36:27.959496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.120 [2024-10-07 13:36:27.959528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.120 [2024-10-07 13:36:27.959546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.120 [2024-10-07 13:36:27.959631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.120 [2024-10-07 13:36:27.959656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.120 [2024-10-07 13:36:27.959684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.120 [2024-10-07 13:36:27.960006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.120 [2024-10-07 13:36:27.960036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.120 [2024-10-07 13:36:27.960172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.120 [2024-10-07 13:36:27.960194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.120 [2024-10-07 13:36:27.960210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.120 [2024-10-07 13:36:27.960228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.120 [2024-10-07 13:36:27.960243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.120 [2024-10-07 13:36:27.960256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.120 [2024-10-07 13:36:27.960293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.120 [2024-10-07 13:36:27.960313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.120 [2024-10-07 13:36:27.970704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.120 [2024-10-07 13:36:27.970738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.120 [2024-10-07 13:36:27.970899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.120 [2024-10-07 13:36:27.970928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.120 [2024-10-07 13:36:27.970945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.120 [2024-10-07 13:36:27.971024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.120 [2024-10-07 13:36:27.971049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.120 [2024-10-07 13:36:27.971066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.120 [2024-10-07 13:36:27.971092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.120 [2024-10-07 13:36:27.971120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.120 [2024-10-07 13:36:27.971142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.120 [2024-10-07 13:36:27.971157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.120 [2024-10-07 13:36:27.971171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.120 [2024-10-07 13:36:27.971188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.120 [2024-10-07 13:36:27.971203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.120 [2024-10-07 13:36:27.971216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.120 [2024-10-07 13:36:27.971240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.120 [2024-10-07 13:36:27.971257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.120 [2024-10-07 13:36:27.983357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.120 [2024-10-07 13:36:27.983390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.120 [2024-10-07 13:36:27.983681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.120 [2024-10-07 13:36:27.983711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.120 [2024-10-07 13:36:27.983729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.120 [2024-10-07 13:36:27.983865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.120 [2024-10-07 13:36:27.983890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.120 [2024-10-07 13:36:27.983906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.120 [2024-10-07 13:36:27.984945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.120 [2024-10-07 13:36:27.984991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.120 [2024-10-07 13:36:27.986205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.120 [2024-10-07 13:36:27.986230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.120 [2024-10-07 13:36:27.986244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.120 [2024-10-07 13:36:27.986260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.120 [2024-10-07 13:36:27.986273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.120 [2024-10-07 13:36:27.986285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.120 [2024-10-07 13:36:27.986445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.120 [2024-10-07 13:36:27.986468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.120 [2024-10-07 13:36:27.993470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.120 [2024-10-07 13:36:27.995579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.120 [2024-10-07 13:36:27.995746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.120 [2024-10-07 13:36:27.995785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.120 [2024-10-07 13:36:27.995803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.120 [2024-10-07 13:36:27.996808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.120 [2024-10-07 13:36:27.996839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.120 [2024-10-07 13:36:27.996857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.120 [2024-10-07 13:36:27.996876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.120 [2024-10-07 13:36:27.997143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.120 [2024-10-07 13:36:27.997169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.120 [2024-10-07 13:36:27.997183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.120 [2024-10-07 13:36:27.997197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.120 [2024-10-07 13:36:27.997401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.120 [2024-10-07 13:36:27.997426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.120 [2024-10-07 13:36:27.997439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.120 [2024-10-07 13:36:27.997453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.120 [2024-10-07 13:36:27.997574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.120 [2024-10-07 13:36:28.003556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.120 [2024-10-07 13:36:28.003702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.120 [2024-10-07 13:36:28.003732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.120 [2024-10-07 13:36:28.003750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.120 [2024-10-07 13:36:28.003789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.120 [2024-10-07 13:36:28.003817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.120 [2024-10-07 13:36:28.003832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.120 [2024-10-07 13:36:28.003847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.120 [2024-10-07 13:36:28.003872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.120 [2024-10-07 13:36:28.011593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.120 [2024-10-07 13:36:28.011903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.120 [2024-10-07 13:36:28.011936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.120 [2024-10-07 13:36:28.011954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.120 [2024-10-07 13:36:28.012267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.120 [2024-10-07 13:36:28.012444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.120 [2024-10-07 13:36:28.012476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.120 [2024-10-07 13:36:28.012492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.120 [2024-10-07 13:36:28.012599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.120 [2024-10-07 13:36:28.013640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.120 [2024-10-07 13:36:28.013787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.120 [2024-10-07 13:36:28.013815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.120 [2024-10-07 13:36:28.013832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.120 [2024-10-07 13:36:28.013857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.120 [2024-10-07 13:36:28.013881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.120 [2024-10-07 13:36:28.013896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.120 [2024-10-07 13:36:28.013910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.121 [2024-10-07 13:36:28.013934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.121 [2024-10-07 13:36:28.024688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.121 [2024-10-07 13:36:28.024912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.121 [2024-10-07 13:36:28.025055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.121 [2024-10-07 13:36:28.025085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.121 [2024-10-07 13:36:28.025102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.121 [2024-10-07 13:36:28.025377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.121 [2024-10-07 13:36:28.025407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.121 [2024-10-07 13:36:28.025424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.121 [2024-10-07 13:36:28.025443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.121 [2024-10-07 13:36:28.025495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.121 [2024-10-07 13:36:28.025517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.121 [2024-10-07 13:36:28.025531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.121 [2024-10-07 13:36:28.025546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.121 [2024-10-07 13:36:28.025738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.121 [2024-10-07 13:36:28.025778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.121 [2024-10-07 13:36:28.025792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.121 [2024-10-07 13:36:28.025805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.121 [2024-10-07 13:36:28.025871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.121 [2024-10-07 13:36:28.034777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.121 [2024-10-07 13:36:28.035631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.121 [2024-10-07 13:36:28.035663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.121 [2024-10-07 13:36:28.035691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.121 [2024-10-07 13:36:28.040612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.121 [2024-10-07 13:36:28.040798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.121 [2024-10-07 13:36:28.040835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.121 [2024-10-07 13:36:28.040852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.121 [2024-10-07 13:36:28.040865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.121 [2024-10-07 13:36:28.040890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.121 [2024-10-07 13:36:28.041006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.121 [2024-10-07 13:36:28.041033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.121 [2024-10-07 13:36:28.041050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.121 [2024-10-07 13:36:28.041076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.121 [2024-10-07 13:36:28.041100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.121 [2024-10-07 13:36:28.041116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.121 [2024-10-07 13:36:28.041129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.121 [2024-10-07 13:36:28.041153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.121 [2024-10-07 13:36:28.046923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.121 [2024-10-07 13:36:28.047226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.121 [2024-10-07 13:36:28.047259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.121 [2024-10-07 13:36:28.047277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.121 [2024-10-07 13:36:28.047304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.121 [2024-10-07 13:36:28.047329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.121 [2024-10-07 13:36:28.047344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.121 [2024-10-07 13:36:28.047357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.121 [2024-10-07 13:36:28.047381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.121 [2024-10-07 13:36:28.050882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.121 [2024-10-07 13:36:28.051060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.121 [2024-10-07 13:36:28.051088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.121 [2024-10-07 13:36:28.051105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.121 [2024-10-07 13:36:28.051136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.121 [2024-10-07 13:36:28.051161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.121 [2024-10-07 13:36:28.051176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.121 [2024-10-07 13:36:28.051190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.121 [2024-10-07 13:36:28.051213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.121 [2024-10-07 13:36:28.057982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.121 [2024-10-07 13:36:28.058124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.121 [2024-10-07 13:36:28.058153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.121 [2024-10-07 13:36:28.058171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.121 [2024-10-07 13:36:28.058626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.121 [2024-10-07 13:36:28.058893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.121 [2024-10-07 13:36:28.058919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.121 [2024-10-07 13:36:28.058935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.121 [2024-10-07 13:36:28.058986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.121 [2024-10-07 13:36:28.060984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.121 [2024-10-07 13:36:28.061197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.121 [2024-10-07 13:36:28.061224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.121 [2024-10-07 13:36:28.061241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.121 [2024-10-07 13:36:28.061268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.121 [2024-10-07 13:36:28.061292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.121 [2024-10-07 13:36:28.061307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.121 [2024-10-07 13:36:28.061322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.121 [2024-10-07 13:36:28.061346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.121 [2024-10-07 13:36:28.069170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.121 [2024-10-07 13:36:28.069427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.121 [2024-10-07 13:36:28.069459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.121 [2024-10-07 13:36:28.069478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.121 [2024-10-07 13:36:28.069597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.121 [2024-10-07 13:36:28.069716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.121 [2024-10-07 13:36:28.069738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.121 [2024-10-07 13:36:28.069758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.121 [2024-10-07 13:36:28.069880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.121 [2024-10-07 13:36:28.074646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.121 [2024-10-07 13:36:28.074826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.121 [2024-10-07 13:36:28.074856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.121 [2024-10-07 13:36:28.074873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.121 [2024-10-07 13:36:28.075072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.121 [2024-10-07 13:36:28.075141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.121 [2024-10-07 13:36:28.075176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.121 [2024-10-07 13:36:28.075191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.121 [2024-10-07 13:36:28.075217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.121 [2024-10-07 13:36:28.079258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.121 [2024-10-07 13:36:28.079498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.121 [2024-10-07 13:36:28.079529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.121 [2024-10-07 13:36:28.079546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.121 [2024-10-07 13:36:28.079572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.121 [2024-10-07 13:36:28.079596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.121 [2024-10-07 13:36:28.079611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.122 [2024-10-07 13:36:28.079624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.122 [2024-10-07 13:36:28.079648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.122 [2024-10-07 13:36:28.088599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.122 [2024-10-07 13:36:28.088762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.122 [2024-10-07 13:36:28.088791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.122 [2024-10-07 13:36:28.088808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.122 [2024-10-07 13:36:28.088834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.122 [2024-10-07 13:36:28.088858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.122 [2024-10-07 13:36:28.088873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.122 [2024-10-07 13:36:28.088887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.122 [2024-10-07 13:36:28.088912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.122 [2024-10-07 13:36:28.089338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.122 [2024-10-07 13:36:28.089475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.122 [2024-10-07 13:36:28.089502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.122 [2024-10-07 13:36:28.089519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.122 [2024-10-07 13:36:28.089544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.122 [2024-10-07 13:36:28.089568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.122 [2024-10-07 13:36:28.089583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.122 [2024-10-07 13:36:28.089596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.122 [2024-10-07 13:36:28.089620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.122 [2024-10-07 13:36:28.102287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.122 [2024-10-07 13:36:28.102321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.122 [2024-10-07 13:36:28.102462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.122 [2024-10-07 13:36:28.102492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.122 [2024-10-07 13:36:28.102508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.122 [2024-10-07 13:36:28.102618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.122 [2024-10-07 13:36:28.102644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.122 [2024-10-07 13:36:28.102660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.122 [2024-10-07 13:36:28.102697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.122 [2024-10-07 13:36:28.102720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.122 [2024-10-07 13:36:28.102741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.122 [2024-10-07 13:36:28.102756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.122 [2024-10-07 13:36:28.102769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.122 [2024-10-07 13:36:28.102785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.122 [2024-10-07 13:36:28.102800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.122 [2024-10-07 13:36:28.102814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.122 [2024-10-07 13:36:28.102838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.122 [2024-10-07 13:36:28.102854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.122 [2024-10-07 13:36:28.115393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.122 [2024-10-07 13:36:28.115427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.122 [2024-10-07 13:36:28.115621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.122 [2024-10-07 13:36:28.115649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.122 [2024-10-07 13:36:28.115678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.122 [2024-10-07 13:36:28.115803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.122 [2024-10-07 13:36:28.115829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.122 [2024-10-07 13:36:28.115845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.122 [2024-10-07 13:36:28.115953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.122 [2024-10-07 13:36:28.115979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.122 [2024-10-07 13:36:28.116101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.122 [2024-10-07 13:36:28.116137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.122 [2024-10-07 13:36:28.116150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.122 [2024-10-07 13:36:28.116167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.122 [2024-10-07 13:36:28.116181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.122 [2024-10-07 13:36:28.116193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.122 [2024-10-07 13:36:28.117317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.122 [2024-10-07 13:36:28.117343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.122 [2024-10-07 13:36:28.125506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.122 [2024-10-07 13:36:28.125554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.122 [2024-10-07 13:36:28.125716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.122 [2024-10-07 13:36:28.125745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.122 [2024-10-07 13:36:28.125761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.122 [2024-10-07 13:36:28.125852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.122 [2024-10-07 13:36:28.125877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.122 [2024-10-07 13:36:28.125893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.122 [2024-10-07 13:36:28.125912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.122 [2024-10-07 13:36:28.125938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.122 [2024-10-07 13:36:28.125956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.122 [2024-10-07 13:36:28.125970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.122 [2024-10-07 13:36:28.125983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.122 [2024-10-07 13:36:28.126008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.122 [2024-10-07 13:36:28.126025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.122 [2024-10-07 13:36:28.126038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.122 [2024-10-07 13:36:28.126052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.122 [2024-10-07 13:36:28.126080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.122 [2024-10-07 13:36:28.135606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.122 [2024-10-07 13:36:28.135781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.122 [2024-10-07 13:36:28.135811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.122 [2024-10-07 13:36:28.135828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.122 [2024-10-07 13:36:28.136101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.122 [2024-10-07 13:36:28.136179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.122 [2024-10-07 13:36:28.136213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.122 [2024-10-07 13:36:28.136245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.122 [2024-10-07 13:36:28.136259] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.122 [2024-10-07 13:36:28.136443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.122 [2024-10-07 13:36:28.136570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.122 [2024-10-07 13:36:28.136598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.122 [2024-10-07 13:36:28.136615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.122 [2024-10-07 13:36:28.136676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.122 [2024-10-07 13:36:28.137142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.122 [2024-10-07 13:36:28.137166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.122 [2024-10-07 13:36:28.137180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.122 [2024-10-07 13:36:28.137412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.122 [2024-10-07 13:36:28.149282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.122 [2024-10-07 13:36:28.149317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.122 [2024-10-07 13:36:28.149982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.122 [2024-10-07 13:36:28.150015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.122 [2024-10-07 13:36:28.150033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.122 [2024-10-07 13:36:28.150147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.123 [2024-10-07 13:36:28.150173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.123 [2024-10-07 13:36:28.150189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.123 [2024-10-07 13:36:28.150568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.123 [2024-10-07 13:36:28.150599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.123 [2024-10-07 13:36:28.150692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.123 [2024-10-07 13:36:28.150720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.123 [2024-10-07 13:36:28.150735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.123 [2024-10-07 13:36:28.150752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.123 [2024-10-07 13:36:28.150768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.123 [2024-10-07 13:36:28.150781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.123 [2024-10-07 13:36:28.150979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.123 [2024-10-07 13:36:28.151002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.123 [2024-10-07 13:36:28.162205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.123 [2024-10-07 13:36:28.162239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.123 [2024-10-07 13:36:28.162462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.123 [2024-10-07 13:36:28.162492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.123 [2024-10-07 13:36:28.162509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.123 [2024-10-07 13:36:28.162643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.123 [2024-10-07 13:36:28.162678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.123 [2024-10-07 13:36:28.162697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.123 [2024-10-07 13:36:28.162807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.123 [2024-10-07 13:36:28.162834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.123 [2024-10-07 13:36:28.165018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.123 [2024-10-07 13:36:28.165045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.123 [2024-10-07 13:36:28.165059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.123 [2024-10-07 13:36:28.165077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.123 [2024-10-07 13:36:28.165091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.123 [2024-10-07 13:36:28.165104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.123 [2024-10-07 13:36:28.165989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.123 [2024-10-07 13:36:28.166015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.123 [2024-10-07 13:36:28.172319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.123 [2024-10-07 13:36:28.172365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.123 [2024-10-07 13:36:28.172527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.123 [2024-10-07 13:36:28.172555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.123 [2024-10-07 13:36:28.172572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.123 [2024-10-07 13:36:28.172706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.123 [2024-10-07 13:36:28.172733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.123 [2024-10-07 13:36:28.172749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.123 [2024-10-07 13:36:28.172768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.123 [2024-10-07 13:36:28.172795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.123 [2024-10-07 13:36:28.172814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.123 [2024-10-07 13:36:28.172827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.123 [2024-10-07 13:36:28.172840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.123 [2024-10-07 13:36:28.172865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.123 [2024-10-07 13:36:28.172882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.123 [2024-10-07 13:36:28.172896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.123 [2024-10-07 13:36:28.172910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.123 [2024-10-07 13:36:28.172933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.123 [2024-10-07 13:36:28.182403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.123 [2024-10-07 13:36:28.182528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.123 [2024-10-07 13:36:28.182557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.123 [2024-10-07 13:36:28.182574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.123 [2024-10-07 13:36:28.182786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.123 [2024-10-07 13:36:28.182877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.123 [2024-10-07 13:36:28.182911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.123 [2024-10-07 13:36:28.182928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.123 [2024-10-07 13:36:28.182942] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.123 [2024-10-07 13:36:28.182967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.123 [2024-10-07 13:36:28.183085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.123 [2024-10-07 13:36:28.183112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.123 [2024-10-07 13:36:28.183130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.123 [2024-10-07 13:36:28.183587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.123 [2024-10-07 13:36:28.183839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.123 [2024-10-07 13:36:28.183865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.123 [2024-10-07 13:36:28.183880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.123 [2024-10-07 13:36:28.183932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.123 [2024-10-07 13:36:28.195502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.123 [2024-10-07 13:36:28.195536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.123 [2024-10-07 13:36:28.196111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.123 [2024-10-07 13:36:28.196157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.123 [2024-10-07 13:36:28.196176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.123 [2024-10-07 13:36:28.196290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.123 [2024-10-07 13:36:28.196316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.123 [2024-10-07 13:36:28.196332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.123 [2024-10-07 13:36:28.196553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.123 [2024-10-07 13:36:28.196581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.123 [2024-10-07 13:36:28.196802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.123 [2024-10-07 13:36:28.196826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.123 [2024-10-07 13:36:28.196840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.123 [2024-10-07 13:36:28.196858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.123 [2024-10-07 13:36:28.196872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.123 [2024-10-07 13:36:28.196886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.123 [2024-10-07 13:36:28.197158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.123 [2024-10-07 13:36:28.197182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.123 [2024-10-07 13:36:28.211547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.123 [2024-10-07 13:36:28.211581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.123 [2024-10-07 13:36:28.212216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.124 [2024-10-07 13:36:28.212248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.124 [2024-10-07 13:36:28.212265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.124 [2024-10-07 13:36:28.212342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.124 [2024-10-07 13:36:28.212367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.124 [2024-10-07 13:36:28.212383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.124 [2024-10-07 13:36:28.212808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.124 [2024-10-07 13:36:28.212837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.124 [2024-10-07 13:36:28.213068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.124 [2024-10-07 13:36:28.213094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.124 [2024-10-07 13:36:28.213114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.124 [2024-10-07 13:36:28.213133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.124 [2024-10-07 13:36:28.213148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.124 [2024-10-07 13:36:28.213161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.124 [2024-10-07 13:36:28.213226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.124 [2024-10-07 13:36:28.213261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.124 [2024-10-07 13:36:28.225454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.124 [2024-10-07 13:36:28.225488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.124 [2024-10-07 13:36:28.226460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.124 [2024-10-07 13:36:28.226492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.124 [2024-10-07 13:36:28.226509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.124 [2024-10-07 13:36:28.226625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.124 [2024-10-07 13:36:28.226651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.124 [2024-10-07 13:36:28.226674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.124 [2024-10-07 13:36:28.227092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.124 [2024-10-07 13:36:28.227137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.124 [2024-10-07 13:36:28.227362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.124 [2024-10-07 13:36:28.227388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.124 [2024-10-07 13:36:28.227403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.124 [2024-10-07 13:36:28.227421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.124 [2024-10-07 13:36:28.227436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.124 [2024-10-07 13:36:28.227466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.124 [2024-10-07 13:36:28.227547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.124 [2024-10-07 13:36:28.227568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.124 [2024-10-07 13:36:28.236564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.124 [2024-10-07 13:36:28.236597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.124 [2024-10-07 13:36:28.236830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.124 [2024-10-07 13:36:28.236860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.124 [2024-10-07 13:36:28.236877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.124 [2024-10-07 13:36:28.236987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.124 [2024-10-07 13:36:28.237012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.124 [2024-10-07 13:36:28.237034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.124 [2024-10-07 13:36:28.237144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.124 [2024-10-07 13:36:28.237171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.124 [2024-10-07 13:36:28.237301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.124 [2024-10-07 13:36:28.237321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.124 [2024-10-07 13:36:28.237334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.124 [2024-10-07 13:36:28.237351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.124 [2024-10-07 13:36:28.237365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.124 [2024-10-07 13:36:28.237376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.124 [2024-10-07 13:36:28.237476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.124 [2024-10-07 13:36:28.237496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.124 [2024-10-07 13:36:28.246698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.124 [2024-10-07 13:36:28.246746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.124 [2024-10-07 13:36:28.246867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.124 [2024-10-07 13:36:28.246895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.124 [2024-10-07 13:36:28.246912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.124 [2024-10-07 13:36:28.247018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.124 [2024-10-07 13:36:28.247043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.124 [2024-10-07 13:36:28.247059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.124 [2024-10-07 13:36:28.247077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.124 [2024-10-07 13:36:28.247102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.124 [2024-10-07 13:36:28.247120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.124 [2024-10-07 13:36:28.247133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.124 [2024-10-07 13:36:28.247146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.124 [2024-10-07 13:36:28.247171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.124 [2024-10-07 13:36:28.247188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.124 [2024-10-07 13:36:28.247200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.124 [2024-10-07 13:36:28.247213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.124 [2024-10-07 13:36:28.247234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.124 [2024-10-07 13:36:28.257087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.124 [2024-10-07 13:36:28.257126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.124 [2024-10-07 13:36:28.257261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.124 [2024-10-07 13:36:28.257289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.124 [2024-10-07 13:36:28.257306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.124 [2024-10-07 13:36:28.257416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.124 [2024-10-07 13:36:28.257442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.124 [2024-10-07 13:36:28.257458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.124 [2024-10-07 13:36:28.257484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.124 [2024-10-07 13:36:28.257506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.124 [2024-10-07 13:36:28.257527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.124 [2024-10-07 13:36:28.257543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.124 [2024-10-07 13:36:28.257557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.124 [2024-10-07 13:36:28.257574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.124 [2024-10-07 13:36:28.257589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.124 [2024-10-07 13:36:28.257602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.124 [2024-10-07 13:36:28.257642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.124 [2024-10-07 13:36:28.257659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.124 [2024-10-07 13:36:28.267708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.124 [2024-10-07 13:36:28.267742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.124 [2024-10-07 13:36:28.270435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.124 [2024-10-07 13:36:28.270468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.124 [2024-10-07 13:36:28.270486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.124 [2024-10-07 13:36:28.270624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.124 [2024-10-07 13:36:28.270649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.124 [2024-10-07 13:36:28.270674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.124 [2024-10-07 13:36:28.272217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.124 [2024-10-07 13:36:28.272249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.125 [2024-10-07 13:36:28.272305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.125 [2024-10-07 13:36:28.272325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.125 [2024-10-07 13:36:28.272339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.125 [2024-10-07 13:36:28.272362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.125 [2024-10-07 13:36:28.272378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.125 [2024-10-07 13:36:28.272390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.125 [2024-10-07 13:36:28.272415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.125 [2024-10-07 13:36:28.272431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.125 [2024-10-07 13:36:28.277823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.125 [2024-10-07 13:36:28.277871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.125 [2024-10-07 13:36:28.278055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.125 [2024-10-07 13:36:28.278083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.125 [2024-10-07 13:36:28.278100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.125 [2024-10-07 13:36:28.278212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.125 [2024-10-07 13:36:28.278239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.125 [2024-10-07 13:36:28.278255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.125 [2024-10-07 13:36:28.278274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.125 [2024-10-07 13:36:28.278300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.125 [2024-10-07 13:36:28.278319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.125 [2024-10-07 13:36:28.278332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.125 [2024-10-07 13:36:28.278345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.125 [2024-10-07 13:36:28.278371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.125 [2024-10-07 13:36:28.278388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.125 [2024-10-07 13:36:28.278401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.125 [2024-10-07 13:36:28.278414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.125 [2024-10-07 13:36:28.278453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.125 [2024-10-07 13:36:28.287911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.125 [2024-10-07 13:36:28.288260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.125 [2024-10-07 13:36:28.288292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.125 [2024-10-07 13:36:28.288310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.125 [2024-10-07 13:36:28.288376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.125 [2024-10-07 13:36:28.288569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.125 [2024-10-07 13:36:28.288605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.125 [2024-10-07 13:36:28.288647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.125 [2024-10-07 13:36:28.288662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.125 [2024-10-07 13:36:28.288739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.125 [2024-10-07 13:36:28.288856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.125 [2024-10-07 13:36:28.288883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.125 [2024-10-07 13:36:28.288899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.125 [2024-10-07 13:36:28.289083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.125 [2024-10-07 13:36:28.289170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.125 [2024-10-07 13:36:28.289191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.125 [2024-10-07 13:36:28.289205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.125 [2024-10-07 13:36:28.289230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.125 [2024-10-07 13:36:28.302275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.125 [2024-10-07 13:36:28.302309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.125 [2024-10-07 13:36:28.302425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.125 [2024-10-07 13:36:28.302454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.125 [2024-10-07 13:36:28.302471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.125 [2024-10-07 13:36:28.302560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.125 [2024-10-07 13:36:28.302586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.125 [2024-10-07 13:36:28.302602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.125 [2024-10-07 13:36:28.302627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.125 [2024-10-07 13:36:28.302648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.125 [2024-10-07 13:36:28.302682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.125 [2024-10-07 13:36:28.302712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.125 [2024-10-07 13:36:28.302726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.125 [2024-10-07 13:36:28.302744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.125 [2024-10-07 13:36:28.302759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.125 [2024-10-07 13:36:28.302772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.125 [2024-10-07 13:36:28.302795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.125 [2024-10-07 13:36:28.302812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.125 [2024-10-07 13:36:28.316340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.125 [2024-10-07 13:36:28.316374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.125 [2024-10-07 13:36:28.316503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.125 [2024-10-07 13:36:28.316533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.125 [2024-10-07 13:36:28.316551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.125 [2024-10-07 13:36:28.316637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.125 [2024-10-07 13:36:28.316664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.125 [2024-10-07 13:36:28.316691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.125 [2024-10-07 13:36:28.316718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.125 [2024-10-07 13:36:28.316739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.125 [2024-10-07 13:36:28.316760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.125 [2024-10-07 13:36:28.316775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.125 [2024-10-07 13:36:28.316789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.125 [2024-10-07 13:36:28.316805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.125 [2024-10-07 13:36:28.316819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.125 [2024-10-07 13:36:28.316834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.125 [2024-10-07 13:36:28.316859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.125 [2024-10-07 13:36:28.316890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.125 [2024-10-07 13:36:28.330868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.125 [2024-10-07 13:36:28.330902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.125 [2024-10-07 13:36:28.331488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.125 [2024-10-07 13:36:28.331520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.125 [2024-10-07 13:36:28.331538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.125 [2024-10-07 13:36:28.331650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.125 [2024-10-07 13:36:28.331688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.125 [2024-10-07 13:36:28.331706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.125 [2024-10-07 13:36:28.331927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.125 [2024-10-07 13:36:28.331968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.125 [2024-10-07 13:36:28.332017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.125 [2024-10-07 13:36:28.332043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.125 [2024-10-07 13:36:28.332058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.125 [2024-10-07 13:36:28.332075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.125 [2024-10-07 13:36:28.332095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.125 [2024-10-07 13:36:28.332111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.125 [2024-10-07 13:36:28.332137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.126 [2024-10-07 13:36:28.332154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.126 [2024-10-07 13:36:28.344214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.126 [2024-10-07 13:36:28.344249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.126 [2024-10-07 13:36:28.344507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.126 [2024-10-07 13:36:28.344536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.126 [2024-10-07 13:36:28.344554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.126 [2024-10-07 13:36:28.344677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.126 [2024-10-07 13:36:28.344714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.126 [2024-10-07 13:36:28.344730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.126 [2024-10-07 13:36:28.346888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.126 [2024-10-07 13:36:28.346921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.126 [2024-10-07 13:36:28.347910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.126 [2024-10-07 13:36:28.347935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.126 [2024-10-07 13:36:28.347949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.126 [2024-10-07 13:36:28.347967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.126 [2024-10-07 13:36:28.347995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.126 [2024-10-07 13:36:28.348008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.126 [2024-10-07 13:36:28.348470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.126 [2024-10-07 13:36:28.348494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.126 [2024-10-07 13:36:28.354330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.126 [2024-10-07 13:36:28.354376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.126 [2024-10-07 13:36:28.354493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.126 [2024-10-07 13:36:28.354520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.126 [2024-10-07 13:36:28.354536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.126 [2024-10-07 13:36:28.354689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.126 [2024-10-07 13:36:28.354716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.126 [2024-10-07 13:36:28.354732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.126 [2024-10-07 13:36:28.354757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.126 [2024-10-07 13:36:28.354785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.126 [2024-10-07 13:36:28.354803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.126 [2024-10-07 13:36:28.354815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.126 [2024-10-07 13:36:28.354829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.126 [2024-10-07 13:36:28.354854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.126 [2024-10-07 13:36:28.354871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.126 [2024-10-07 13:36:28.354884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.126 [2024-10-07 13:36:28.354896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.126 [2024-10-07 13:36:28.354919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.126 [2024-10-07 13:36:28.364537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.126 [2024-10-07 13:36:28.364585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.126 [2024-10-07 13:36:28.364707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.126 [2024-10-07 13:36:28.364736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.126 [2024-10-07 13:36:28.364753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.126 [2024-10-07 13:36:28.364864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.126 [2024-10-07 13:36:28.364890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.126 [2024-10-07 13:36:28.364906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.126 [2024-10-07 13:36:28.365106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.126 [2024-10-07 13:36:28.365149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.126 [2024-10-07 13:36:28.365226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.126 [2024-10-07 13:36:28.365248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.126 [2024-10-07 13:36:28.365262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.126 [2024-10-07 13:36:28.365296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.126 [2024-10-07 13:36:28.365311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.126 [2024-10-07 13:36:28.365323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.126 [2024-10-07 13:36:28.365348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.126 [2024-10-07 13:36:28.365365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.126 [2024-10-07 13:36:28.377902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.126 [2024-10-07 13:36:28.377936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.126 [2024-10-07 13:36:28.378367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.126 [2024-10-07 13:36:28.378400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.126 [2024-10-07 13:36:28.378418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.126 [2024-10-07 13:36:28.378548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.126 [2024-10-07 13:36:28.378574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.126 [2024-10-07 13:36:28.378590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.126 [2024-10-07 13:36:28.378806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.126 [2024-10-07 13:36:28.378835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.126 [2024-10-07 13:36:28.379025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.126 [2024-10-07 13:36:28.379048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.126 [2024-10-07 13:36:28.379062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.126 [2024-10-07 13:36:28.379081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.126 [2024-10-07 13:36:28.379096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.126 [2024-10-07 13:36:28.379109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.126 [2024-10-07 13:36:28.379153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.126 [2024-10-07 13:36:28.379174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.126 [2024-10-07 13:36:28.391801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.126 [2024-10-07 13:36:28.391835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.126 [2024-10-07 13:36:28.392175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.126 [2024-10-07 13:36:28.392206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.126 [2024-10-07 13:36:28.392224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.126 [2024-10-07 13:36:28.392315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.126 [2024-10-07 13:36:28.392341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.126 [2024-10-07 13:36:28.392358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.126 [2024-10-07 13:36:28.392638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.126 [2024-10-07 13:36:28.392680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.126 [2024-10-07 13:36:28.392891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.126 [2024-10-07 13:36:28.392916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.126 [2024-10-07 13:36:28.392931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.126 [2024-10-07 13:36:28.392950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.126 [2024-10-07 13:36:28.392970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.126 [2024-10-07 13:36:28.392984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.126 [2024-10-07 13:36:28.393187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.126 [2024-10-07 13:36:28.393210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.126 [2024-10-07 13:36:28.406573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.126 [2024-10-07 13:36:28.406621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.126 [2024-10-07 13:36:28.406743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.126 [2024-10-07 13:36:28.406771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.126 [2024-10-07 13:36:28.406789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.126 [2024-10-07 13:36:28.406886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.126 [2024-10-07 13:36:28.406912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.127 [2024-10-07 13:36:28.406928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.127 [2024-10-07 13:36:28.406955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.127 [2024-10-07 13:36:28.406976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.127 [2024-10-07 13:36:28.406997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.127 [2024-10-07 13:36:28.407013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.127 [2024-10-07 13:36:28.407026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.127 [2024-10-07 13:36:28.407043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.127 [2024-10-07 13:36:28.407057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.127 [2024-10-07 13:36:28.407070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.127 [2024-10-07 13:36:28.407094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.127 [2024-10-07 13:36:28.407111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.127 [2024-10-07 13:36:28.422693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.127 [2024-10-07 13:36:28.422726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.127 [2024-10-07 13:36:28.422939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.127 [2024-10-07 13:36:28.422969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.127 [2024-10-07 13:36:28.422986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.127 [2024-10-07 13:36:28.423128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.127 [2024-10-07 13:36:28.423154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.127 [2024-10-07 13:36:28.423170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.127 [2024-10-07 13:36:28.423632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.127 [2024-10-07 13:36:28.423690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.127 [2024-10-07 13:36:28.424009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.127 [2024-10-07 13:36:28.424035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.127 [2024-10-07 13:36:28.424065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.127 [2024-10-07 13:36:28.424084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.127 [2024-10-07 13:36:28.424098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.127 [2024-10-07 13:36:28.424110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.127 [2024-10-07 13:36:28.424347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.127 [2024-10-07 13:36:28.424371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.127 [2024-10-07 13:36:28.437106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.127 [2024-10-07 13:36:28.437139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.127 [2024-10-07 13:36:28.437306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.127 [2024-10-07 13:36:28.437337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.127 [2024-10-07 13:36:28.437355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.127 [2024-10-07 13:36:28.437463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.127 [2024-10-07 13:36:28.437491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.127 [2024-10-07 13:36:28.437507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.127 [2024-10-07 13:36:28.437532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.127 [2024-10-07 13:36:28.437553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.127 [2024-10-07 13:36:28.437574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.127 [2024-10-07 13:36:28.437589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.127 [2024-10-07 13:36:28.437602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.127 [2024-10-07 13:36:28.437619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.127 [2024-10-07 13:36:28.437633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.127 [2024-10-07 13:36:28.437646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.127 [2024-10-07 13:36:28.437684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.127 [2024-10-07 13:36:28.437704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.127 [2024-10-07 13:36:28.448343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.127 [2024-10-07 13:36:28.448377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.127 [2024-10-07 13:36:28.448622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.127 [2024-10-07 13:36:28.448654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.127 [2024-10-07 13:36:28.448687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.127 [2024-10-07 13:36:28.448780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.127 [2024-10-07 13:36:28.448809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.127 [2024-10-07 13:36:28.448826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.127 [2024-10-07 13:36:28.448940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.127 [2024-10-07 13:36:28.448968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.127 [2024-10-07 13:36:28.449104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.127 [2024-10-07 13:36:28.449128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.127 [2024-10-07 13:36:28.449143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.127 [2024-10-07 13:36:28.449161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.127 [2024-10-07 13:36:28.449191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.127 [2024-10-07 13:36:28.449204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.127 [2024-10-07 13:36:28.449354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.127 [2024-10-07 13:36:28.449378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.127 [2024-10-07 13:36:28.458456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.127 [2024-10-07 13:36:28.458502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.127 [2024-10-07 13:36:28.458684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.127 [2024-10-07 13:36:28.458714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.127 [2024-10-07 13:36:28.458732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.127 [2024-10-07 13:36:28.458824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.127 [2024-10-07 13:36:28.458851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.127 [2024-10-07 13:36:28.458868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.127 [2024-10-07 13:36:28.458887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.127 [2024-10-07 13:36:28.458913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.127 [2024-10-07 13:36:28.458931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.127 [2024-10-07 13:36:28.458945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.127 [2024-10-07 13:36:28.458958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.127 [2024-10-07 13:36:28.458998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.127 [2024-10-07 13:36:28.459016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.127 [2024-10-07 13:36:28.459034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.127 [2024-10-07 13:36:28.459047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.127 [2024-10-07 13:36:28.459071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.127 [2024-10-07 13:36:28.469697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.127 [2024-10-07 13:36:28.469732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.127 [2024-10-07 13:36:28.470049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.127 [2024-10-07 13:36:28.470080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.127 [2024-10-07 13:36:28.470098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.128 [2024-10-07 13:36:28.470233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.128 [2024-10-07 13:36:28.470260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.128 [2024-10-07 13:36:28.470276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.128 [2024-10-07 13:36:28.470326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.128 [2024-10-07 13:36:28.470351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.128 [2024-10-07 13:36:28.470373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.128 [2024-10-07 13:36:28.470389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.128 [2024-10-07 13:36:28.470402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.128 [2024-10-07 13:36:28.470419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.128 [2024-10-07 13:36:28.470434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.128 [2024-10-07 13:36:28.470446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.128 [2024-10-07 13:36:28.470472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.128 [2024-10-07 13:36:28.470488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.128 [2024-10-07 13:36:28.480065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.128 [2024-10-07 13:36:28.480097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.128 [2024-10-07 13:36:28.482877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.128 [2024-10-07 13:36:28.482910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.128 [2024-10-07 13:36:28.482929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.128 [2024-10-07 13:36:28.483056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.128 [2024-10-07 13:36:28.483082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.128 [2024-10-07 13:36:28.483097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.128 [2024-10-07 13:36:28.484126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.128 [2024-10-07 13:36:28.484156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.128 [2024-10-07 13:36:28.484807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.128 [2024-10-07 13:36:28.484833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.128 [2024-10-07 13:36:28.484848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.128 [2024-10-07 13:36:28.484865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.128 [2024-10-07 13:36:28.484880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.128 [2024-10-07 13:36:28.484892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.128 [2024-10-07 13:36:28.485124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.128 [2024-10-07 13:36:28.485148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.128 [2024-10-07 13:36:28.490176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.128 [2024-10-07 13:36:28.490221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.128 [2024-10-07 13:36:28.490364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.128 [2024-10-07 13:36:28.490391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.128 [2024-10-07 13:36:28.490408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.128 [2024-10-07 13:36:28.490701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.128 [2024-10-07 13:36:28.490730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.128 [2024-10-07 13:36:28.490747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.128 [2024-10-07 13:36:28.490766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.128 [2024-10-07 13:36:28.490923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.128 [2024-10-07 13:36:28.490950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.128 [2024-10-07 13:36:28.490964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.128 [2024-10-07 13:36:28.490978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.128 [2024-10-07 13:36:28.491102] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.128 [2024-10-07 13:36:28.491126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.128 [2024-10-07 13:36:28.491140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.128 [2024-10-07 13:36:28.491154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.128 [2024-10-07 13:36:28.491259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.128 [2024-10-07 13:36:28.500361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.128 [2024-10-07 13:36:28.500412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.128 [2024-10-07 13:36:28.500542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.128 [2024-10-07 13:36:28.500571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.128 [2024-10-07 13:36:28.500594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.128 [2024-10-07 13:36:28.500923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.128 [2024-10-07 13:36:28.500953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.128 [2024-10-07 13:36:28.500971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.128 [2024-10-07 13:36:28.500990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.128 [2024-10-07 13:36:28.501042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.128 [2024-10-07 13:36:28.501065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.128 [2024-10-07 13:36:28.501078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.128 [2024-10-07 13:36:28.501092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.128 [2024-10-07 13:36:28.501275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.128 [2024-10-07 13:36:28.501315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.128 [2024-10-07 13:36:28.501330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.128 [2024-10-07 13:36:28.501343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.128 [2024-10-07 13:36:28.501405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.128 [2024-10-07 13:36:28.514064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.128 [2024-10-07 13:36:28.514098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.128 [2024-10-07 13:36:28.514240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.128 [2024-10-07 13:36:28.514270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.128 [2024-10-07 13:36:28.514288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.128 [2024-10-07 13:36:28.514369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.128 [2024-10-07 13:36:28.514396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.128 [2024-10-07 13:36:28.514413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.128 [2024-10-07 13:36:28.514439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.128 [2024-10-07 13:36:28.514461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.128 [2024-10-07 13:36:28.514482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.128 [2024-10-07 13:36:28.514497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.128 [2024-10-07 13:36:28.514511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.128 [2024-10-07 13:36:28.514528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.128 [2024-10-07 13:36:28.514542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.128 [2024-10-07 13:36:28.514556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.128 [2024-10-07 13:36:28.514585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.128 [2024-10-07 13:36:28.514603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.128 [2024-10-07 13:36:28.524178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.128 [2024-10-07 13:36:28.524225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.128 [2024-10-07 13:36:28.524468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.128 [2024-10-07 13:36:28.524497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.128 [2024-10-07 13:36:28.524515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.128 [2024-10-07 13:36:28.524628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.128 [2024-10-07 13:36:28.524655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.128 [2024-10-07 13:36:28.524680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.128 [2024-10-07 13:36:28.524701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.128 [2024-10-07 13:36:28.527337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.128 [2024-10-07 13:36:28.527366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.128 [2024-10-07 13:36:28.527381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.129 [2024-10-07 13:36:28.527395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.129 [2024-10-07 13:36:28.530144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.129 [2024-10-07 13:36:28.530173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.129 [2024-10-07 13:36:28.530187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.129 [2024-10-07 13:36:28.530201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.129 [2024-10-07 13:36:28.531103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.129 [2024-10-07 13:36:28.534502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.129 [2024-10-07 13:36:28.534534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.129 [2024-10-07 13:36:28.534701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.129 [2024-10-07 13:36:28.534732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.129 [2024-10-07 13:36:28.534750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.129 [2024-10-07 13:36:28.534859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.129 [2024-10-07 13:36:28.534885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.129 [2024-10-07 13:36:28.534902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.129 [2024-10-07 13:36:28.534927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.129 [2024-10-07 13:36:28.534949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.129 [2024-10-07 13:36:28.534976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.129 [2024-10-07 13:36:28.534992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.129 [2024-10-07 13:36:28.535006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.129 [2024-10-07 13:36:28.535023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.129 [2024-10-07 13:36:28.535038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.129 [2024-10-07 13:36:28.535051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.129 [2024-10-07 13:36:28.535075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.129 [2024-10-07 13:36:28.535092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.129 [2024-10-07 13:36:28.547052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.129 [2024-10-07 13:36:28.547085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.129 [2024-10-07 13:36:28.547418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.129 [2024-10-07 13:36:28.547450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.129 [2024-10-07 13:36:28.547467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.129 [2024-10-07 13:36:28.547546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.129 [2024-10-07 13:36:28.547572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.129 [2024-10-07 13:36:28.547589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.129 [2024-10-07 13:36:28.548101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.129 [2024-10-07 13:36:28.548131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.129 [2024-10-07 13:36:28.548361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.129 [2024-10-07 13:36:28.548386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.129 [2024-10-07 13:36:28.548401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.129 [2024-10-07 13:36:28.548418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.129 [2024-10-07 13:36:28.548433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.129 [2024-10-07 13:36:28.548445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.129 [2024-10-07 13:36:28.548648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.129 [2024-10-07 13:36:28.548683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.129 [2024-10-07 13:36:28.557372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.129 [2024-10-07 13:36:28.557405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.129 [2024-10-07 13:36:28.557661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.129 [2024-10-07 13:36:28.557699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.129 [2024-10-07 13:36:28.557728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.129 [2024-10-07 13:36:28.557815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.129 [2024-10-07 13:36:28.557843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.129 [2024-10-07 13:36:28.557859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.129 [2024-10-07 13:36:28.560953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.129 [2024-10-07 13:36:28.560986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.129 [2024-10-07 13:36:28.562026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.129 [2024-10-07 13:36:28.562052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.129 [2024-10-07 13:36:28.562066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.129 [2024-10-07 13:36:28.562083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.129 [2024-10-07 13:36:28.562097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.129 [2024-10-07 13:36:28.562110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.129 [2024-10-07 13:36:28.562742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.129 [2024-10-07 13:36:28.562768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.129 [2024-10-07 13:36:28.567484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.129 [2024-10-07 13:36:28.567530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.129 [2024-10-07 13:36:28.567674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.129 [2024-10-07 13:36:28.567705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.129 [2024-10-07 13:36:28.567722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.129 [2024-10-07 13:36:28.567835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.129 [2024-10-07 13:36:28.567863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.129 [2024-10-07 13:36:28.567879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.129 [2024-10-07 13:36:28.567897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.129 [2024-10-07 13:36:28.567923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.129 [2024-10-07 13:36:28.567942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.129 [2024-10-07 13:36:28.567955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.129 [2024-10-07 13:36:28.567968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.129 [2024-10-07 13:36:28.567993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.129 [2024-10-07 13:36:28.568010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.129 [2024-10-07 13:36:28.568024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.129 [2024-10-07 13:36:28.568037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.129 [2024-10-07 13:36:28.570393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.129 [2024-10-07 13:36:28.577583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.129 [2024-10-07 13:36:28.577790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.129 [2024-10-07 13:36:28.577822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.129 [2024-10-07 13:36:28.577840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.129 [2024-10-07 13:36:28.577867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.129 [2024-10-07 13:36:28.577898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.129 [2024-10-07 13:36:28.577999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.129 [2024-10-07 13:36:28.578025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.129 [2024-10-07 13:36:28.578042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.129 [2024-10-07 13:36:28.578057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.129 [2024-10-07 13:36:28.578069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.129 [2024-10-07 13:36:28.578082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.129 [2024-10-07 13:36:28.578375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.129 [2024-10-07 13:36:28.578404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.129 [2024-10-07 13:36:28.578473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.129 [2024-10-07 13:36:28.578494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.129 [2024-10-07 13:36:28.578509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.129 [2024-10-07 13:36:28.578534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.129 [2024-10-07 13:36:28.587909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.129 [2024-10-07 13:36:28.590290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.130 [2024-10-07 13:36:28.590323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.130 [2024-10-07 13:36:28.590341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.130 [2024-10-07 13:36:28.591375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.130 [2024-10-07 13:36:28.591853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.130 [2024-10-07 13:36:28.591890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.130 [2024-10-07 13:36:28.591907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.130 [2024-10-07 13:36:28.591930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.130 [2024-10-07 13:36:28.592172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.130 [2024-10-07 13:36:28.592265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.130 [2024-10-07 13:36:28.592295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.130 [2024-10-07 13:36:28.592321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.130 [2024-10-07 13:36:28.592646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.130 [2024-10-07 13:36:28.592896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.130 [2024-10-07 13:36:28.592929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.130 [2024-10-07 13:36:28.592943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.130 [2024-10-07 13:36:28.592995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.130 [2024-10-07 13:36:28.598140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.130 [2024-10-07 13:36:28.598498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.130 [2024-10-07 13:36:28.598529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.130 [2024-10-07 13:36:28.598546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.130 [2024-10-07 13:36:28.598572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.130 [2024-10-07 13:36:28.598596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.130 [2024-10-07 13:36:28.598611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.130 [2024-10-07 13:36:28.598624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.130 [2024-10-07 13:36:28.598648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.130 [2024-10-07 13:36:28.602616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.130 [2024-10-07 13:36:28.602818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.130 [2024-10-07 13:36:28.602849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.130 [2024-10-07 13:36:28.602867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.130 [2024-10-07 13:36:28.602975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.130 [2024-10-07 13:36:28.603103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.130 [2024-10-07 13:36:28.603125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.130 [2024-10-07 13:36:28.603139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.130 [2024-10-07 13:36:28.606090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.130 [2024-10-07 13:36:28.608448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.130 [2024-10-07 13:36:28.608588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.130 [2024-10-07 13:36:28.608618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.130 [2024-10-07 13:36:28.608636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.130 [2024-10-07 13:36:28.608661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.130 [2024-10-07 13:36:28.608708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.130 [2024-10-07 13:36:28.608729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.130 [2024-10-07 13:36:28.608744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.130 [2024-10-07 13:36:28.608942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.130 [2024-10-07 13:36:28.612723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.130 [2024-10-07 13:36:28.612869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.130 [2024-10-07 13:36:28.612899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.130 [2024-10-07 13:36:28.612915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.130 [2024-10-07 13:36:28.612940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.130 [2024-10-07 13:36:28.612963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.130 [2024-10-07 13:36:28.612978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.130 [2024-10-07 13:36:28.612992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.130 [2024-10-07 13:36:28.613016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.130 8406.00 IOPS, 32.84 MiB/s [2024-10-07T11:36:37.842Z]
00:25:56.130 [2024-10-07 13:36:28.623785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.130 [2024-10-07 13:36:28.624017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.130 [2024-10-07 13:36:28.624167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.130 [2024-10-07 13:36:28.624198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.130 [2024-10-07 13:36:28.624216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.130 [2024-10-07 13:36:28.624511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.130 [2024-10-07 13:36:28.624542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.130 [2024-10-07 13:36:28.624559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.130 [2024-10-07 13:36:28.624578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.130 [2024-10-07 13:36:28.624795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.130 [2024-10-07 13:36:28.624823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.130 [2024-10-07 13:36:28.624837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.130 [2024-10-07 13:36:28.624851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.130 [2024-10-07 13:36:28.624916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.130 [2024-10-07 13:36:28.624949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.130 [2024-10-07 13:36:28.624979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.130 [2024-10-07 13:36:28.624993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.130 [2024-10-07 13:36:28.625017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.130 [2024-10-07 13:36:28.634206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.130 [2024-10-07 13:36:28.634238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.130 [2024-10-07 13:36:28.634430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.130 [2024-10-07 13:36:28.634458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.130 [2024-10-07 13:36:28.634476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.130 [2024-10-07 13:36:28.634592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.130 [2024-10-07 13:36:28.634618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.130 [2024-10-07 13:36:28.634634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.130 [2024-10-07 13:36:28.634659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.130 [2024-10-07 13:36:28.634693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.130 [2024-10-07 13:36:28.634716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.130 [2024-10-07 13:36:28.634735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.130 [2024-10-07 13:36:28.634748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.130 [2024-10-07 13:36:28.634765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.130 [2024-10-07 13:36:28.634780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.130 [2024-10-07 13:36:28.634792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.130 [2024-10-07 13:36:28.637342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.130 [2024-10-07 13:36:28.637369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.130 [2024-10-07 13:36:28.644320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.130 [2024-10-07 13:36:28.644370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.130 [2024-10-07 13:36:28.644526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.130 [2024-10-07 13:36:28.644553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.130 [2024-10-07 13:36:28.644570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.130 [2024-10-07 13:36:28.644685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.130 [2024-10-07 13:36:28.644711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.130 [2024-10-07 13:36:28.644728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.130 [2024-10-07 13:36:28.644746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.131 [2024-10-07 13:36:28.644771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.131 [2024-10-07 13:36:28.644790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.131 [2024-10-07 13:36:28.644803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.131 [2024-10-07 13:36:28.644821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.131 [2024-10-07 13:36:28.644847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.131 [2024-10-07 13:36:28.644864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.131 [2024-10-07 13:36:28.644877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.131 [2024-10-07 13:36:28.644890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.131 [2024-10-07 13:36:28.644913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.131 [2024-10-07 13:36:28.656232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.131 [2024-10-07 13:36:28.656267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.131 [2024-10-07 13:36:28.656377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.131 [2024-10-07 13:36:28.656407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.131 [2024-10-07 13:36:28.656424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.131 [2024-10-07 13:36:28.656557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.131 [2024-10-07 13:36:28.656583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.131 [2024-10-07 13:36:28.656599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.131 [2024-10-07 13:36:28.656624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.131 [2024-10-07 13:36:28.656645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.131 [2024-10-07 13:36:28.656676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.131 [2024-10-07 13:36:28.656693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.131 [2024-10-07 13:36:28.656719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.131 [2024-10-07 13:36:28.656736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.131 [2024-10-07 13:36:28.656750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.131 [2024-10-07 13:36:28.656762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.131 [2024-10-07 13:36:28.656797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.131 [2024-10-07 13:36:28.656814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.131 [2024-10-07 13:36:28.666348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.131 [2024-10-07 13:36:28.666396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.131 [2024-10-07 13:36:28.666529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.131 [2024-10-07 13:36:28.666556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.131 [2024-10-07 13:36:28.666572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.131 [2024-10-07 13:36:28.666731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.131 [2024-10-07 13:36:28.666758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.131 [2024-10-07 13:36:28.666780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.131 [2024-10-07 13:36:28.666799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.131 [2024-10-07 13:36:28.666826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.131 [2024-10-07 13:36:28.666844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.131 [2024-10-07 13:36:28.666857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.131 [2024-10-07 13:36:28.666871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.131 [2024-10-07 13:36:28.669484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.131 [2024-10-07 13:36:28.669512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.131 [2024-10-07 13:36:28.669527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.131 [2024-10-07 13:36:28.669540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.131 [2024-10-07 13:36:28.672331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.131 [2024-10-07 13:36:28.676722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.131 [2024-10-07 13:36:28.676755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.131 [2024-10-07 13:36:28.676918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.131 [2024-10-07 13:36:28.676947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.131 [2024-10-07 13:36:28.676964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.131 [2024-10-07 13:36:28.677072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.131 [2024-10-07 13:36:28.677098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.131 [2024-10-07 13:36:28.677113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.131 [2024-10-07 13:36:28.677139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.131 [2024-10-07 13:36:28.677161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.131 [2024-10-07 13:36:28.677182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.131 [2024-10-07 13:36:28.677198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.131 [2024-10-07 13:36:28.677211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.131 [2024-10-07 13:36:28.677228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.131 [2024-10-07 13:36:28.677243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.131 [2024-10-07 13:36:28.677257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.131 [2024-10-07 13:36:28.677282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.131 [2024-10-07 13:36:28.677314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.131 [2024-10-07 13:36:28.689245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.131 [2024-10-07 13:36:28.689284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.131 [2024-10-07 13:36:28.689663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.131 [2024-10-07 13:36:28.689703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.131 [2024-10-07 13:36:28.689721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.131 [2024-10-07 13:36:28.689855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.131 [2024-10-07 13:36:28.689881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.131 [2024-10-07 13:36:28.689897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.131 [2024-10-07 13:36:28.690384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.131 [2024-10-07 13:36:28.690414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.131 [2024-10-07 13:36:28.690644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.131 [2024-10-07 13:36:28.690678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.131 [2024-10-07 13:36:28.690696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.131 [2024-10-07 13:36:28.690715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.131 [2024-10-07 13:36:28.690731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.131 [2024-10-07 13:36:28.690744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.131 [2024-10-07 13:36:28.690947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.131 [2024-10-07 13:36:28.690970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.131 [2024-10-07 13:36:28.702270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.131 [2024-10-07 13:36:28.702302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.131 [2024-10-07 13:36:28.702645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.131 [2024-10-07 13:36:28.702684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.131 [2024-10-07 13:36:28.702704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.132 [2024-10-07 13:36:28.702813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.132 [2024-10-07 13:36:28.702839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.132 [2024-10-07 13:36:28.702855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.132 [2024-10-07 13:36:28.702906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.132 [2024-10-07 13:36:28.702931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.132 [2024-10-07 13:36:28.702953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.132 [2024-10-07 13:36:28.702968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.132 [2024-10-07 13:36:28.702981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.132 [2024-10-07 13:36:28.703004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.132 [2024-10-07 13:36:28.703020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.132 [2024-10-07 13:36:28.703034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.132 [2024-10-07 13:36:28.703183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.132 [2024-10-07 13:36:28.703207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.132 [2024-10-07 13:36:28.714657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.132 [2024-10-07 13:36:28.714698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.132 [2024-10-07 13:36:28.714916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.132 [2024-10-07 13:36:28.714944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.132 [2024-10-07 13:36:28.714961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.132 [2024-10-07 13:36:28.715072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.132 [2024-10-07 13:36:28.715099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.132 [2024-10-07 13:36:28.715115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.132 [2024-10-07 13:36:28.715224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.132 [2024-10-07 13:36:28.715251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.132 [2024-10-07 13:36:28.715369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.132 [2024-10-07 13:36:28.715389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.132 [2024-10-07 13:36:28.715417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.132 [2024-10-07 13:36:28.715434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.132 [2024-10-07 13:36:28.715448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.132 [2024-10-07 13:36:28.715461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.132 [2024-10-07 13:36:28.715561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.132 [2024-10-07 13:36:28.715581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.132 [2024-10-07 13:36:28.724781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.132 [2024-10-07 13:36:28.724829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.132 [2024-10-07 13:36:28.724969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.132 [2024-10-07 13:36:28.724998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.132 [2024-10-07 13:36:28.725015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.132 [2024-10-07 13:36:28.725123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.132 [2024-10-07 13:36:28.725148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.132 [2024-10-07 13:36:28.725170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.132 [2024-10-07 13:36:28.725190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.132 [2024-10-07 13:36:28.725216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.132 [2024-10-07 13:36:28.725235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.132 [2024-10-07 13:36:28.725249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.132 [2024-10-07 13:36:28.725262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.132 [2024-10-07 13:36:28.725288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.132 [2024-10-07 13:36:28.725305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.132 [2024-10-07 13:36:28.725319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.132 [2024-10-07 13:36:28.725332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.132 [2024-10-07 13:36:28.725355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.132 [2024-10-07 13:36:28.734869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.132 [2024-10-07 13:36:28.735022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.132 [2024-10-07 13:36:28.735052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.132 [2024-10-07 13:36:28.735068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.132 [2024-10-07 13:36:28.735282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.132 [2024-10-07 13:36:28.735358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.132 [2024-10-07 13:36:28.735405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.132 [2024-10-07 13:36:28.735423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.132 [2024-10-07 13:36:28.735437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.132 [2024-10-07 13:36:28.735462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.132 [2024-10-07 13:36:28.735578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.132 [2024-10-07 13:36:28.735604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.132 [2024-10-07 13:36:28.735621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.132 [2024-10-07 13:36:28.735815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.132 [2024-10-07 13:36:28.735872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.132 [2024-10-07 13:36:28.735893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.132 [2024-10-07 13:36:28.735907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.132 [2024-10-07 13:36:28.735931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
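(Editor's note, not part of the captured log.) The pattern repeating above is a retry cycle: disconnect, attempt to reconnect, get refused, mark the controller failed, and try again on the next reset. A minimal, generic sketch of such a bounded reconnect loop with exponential backoff; this is illustrative only and not SPDK's actual reconnect logic, and `try_connect` is a hypothetical callable standing in for the transport connect step:

```python
import time
from typing import Callable

def reconnect_with_backoff(try_connect: Callable[[], bool],
                           max_attempts: int = 5,
                           base_delay: float = 0.01) -> int:
    """Retry try_connect up to max_attempts times.

    Returns the 1-based attempt number that succeeded, or -1 if all
    attempts were exhausted (the caller would then mark the controller
    as being in a failed state).
    """
    for attempt in range(1, max_attempts + 1):
        if try_connect():
            return attempt
        # Back off before the next attempt: base, 2*base, 4*base, ...
        time.sleep(base_delay * (2 ** (attempt - 1)))
    return -1

# Simulate a target that refuses the first two connects, then accepts.
outcomes = iter([False, False, True])
attempts = reconnect_with_backoff(lambda: next(outcomes))
print(attempts)  # 3
```

In the log above no attempt ever succeeds while the listener is down, which corresponds to the `-1` path here: every cycle ends in `Resetting controller failed.` until the test restores the target.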
00:25:56.132 [2024-10-07 13:36:28.748349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.132 [2024-10-07 13:36:28.748383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.132 [2024-10-07 13:36:28.748781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.132 [2024-10-07 13:36:28.748813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.132 [2024-10-07 13:36:28.748831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.132 [2024-10-07 13:36:28.748914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.132 [2024-10-07 13:36:28.748939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.132 [2024-10-07 13:36:28.748955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.132 [2024-10-07 13:36:28.749244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.132 [2024-10-07 13:36:28.749290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.132 [2024-10-07 13:36:28.749524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.132 [2024-10-07 13:36:28.749550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.132 [2024-10-07 13:36:28.749565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.132 [2024-10-07 13:36:28.749583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.132 [2024-10-07 13:36:28.749598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.132 [2024-10-07 13:36:28.749612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.132 [2024-10-07 13:36:28.749856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.132 [2024-10-07 13:36:28.749880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.132 [2024-10-07 13:36:28.759039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.132 [2024-10-07 13:36:28.759072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.132 [2024-10-07 13:36:28.759331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.132 [2024-10-07 13:36:28.759362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.132 [2024-10-07 13:36:28.759380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.132 [2024-10-07 13:36:28.759486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.132 [2024-10-07 13:36:28.759512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.132 [2024-10-07 13:36:28.759530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.132 [2024-10-07 13:36:28.759638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.132 [2024-10-07 13:36:28.759673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.132 [2024-10-07 13:36:28.759793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.132 [2024-10-07 13:36:28.759815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.133 [2024-10-07 13:36:28.759829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.133 [2024-10-07 13:36:28.759847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.133 [2024-10-07 13:36:28.759867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.133 [2024-10-07 13:36:28.759881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.133 [2024-10-07 13:36:28.760080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.133 [2024-10-07 13:36:28.760102] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.133 [2024-10-07 13:36:28.770698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.133 [2024-10-07 13:36:28.770733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.133 [2024-10-07 13:36:28.770957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.133 [2024-10-07 13:36:28.770988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.133 [2024-10-07 13:36:28.771006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.133 [2024-10-07 13:36:28.771118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.133 [2024-10-07 13:36:28.771144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.133 [2024-10-07 13:36:28.771161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.133 [2024-10-07 13:36:28.771269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.133 [2024-10-07 13:36:28.771297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.133 [2024-10-07 13:36:28.771331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.133 [2024-10-07 13:36:28.771365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.133 [2024-10-07 13:36:28.771378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.133 [2024-10-07 13:36:28.771395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.133 [2024-10-07 13:36:28.771424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.133 [2024-10-07 13:36:28.771436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.133 [2024-10-07 13:36:28.771462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.133 [2024-10-07 13:36:28.771477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.133 [2024-10-07 13:36:28.781518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.133 [2024-10-07 13:36:28.781552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.133 [2024-10-07 13:36:28.781695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.133 [2024-10-07 13:36:28.781725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.133 [2024-10-07 13:36:28.781742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.133 [2024-10-07 13:36:28.781823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.133 [2024-10-07 13:36:28.781849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.133 [2024-10-07 13:36:28.781865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.133 [2024-10-07 13:36:28.781896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.133 [2024-10-07 13:36:28.781919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.133 [2024-10-07 13:36:28.781940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.133 [2024-10-07 13:36:28.781956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.133 [2024-10-07 13:36:28.781969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.133 [2024-10-07 13:36:28.781986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.133 [2024-10-07 13:36:28.782000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.133 [2024-10-07 13:36:28.782013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.133 [2024-10-07 13:36:28.782037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.133 [2024-10-07 13:36:28.782054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.133 [2024-10-07 13:36:28.792264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.133 [2024-10-07 13:36:28.792298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.133 [2024-10-07 13:36:28.792435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.133 [2024-10-07 13:36:28.792464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.133 [2024-10-07 13:36:28.792481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.133 [2024-10-07 13:36:28.792593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.133 [2024-10-07 13:36:28.792618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.133 [2024-10-07 13:36:28.792634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.133 [2024-10-07 13:36:28.792833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.133 [2024-10-07 13:36:28.792862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.133 [2024-10-07 13:36:28.792911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.133 [2024-10-07 13:36:28.792931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.133 [2024-10-07 13:36:28.792945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.133 [2024-10-07 13:36:28.792963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.133 [2024-10-07 13:36:28.792978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.133 [2024-10-07 13:36:28.792991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.133 [2024-10-07 13:36:28.793172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.133 [2024-10-07 13:36:28.793195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.133 [2024-10-07 13:36:28.805715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.133 [2024-10-07 13:36:28.805749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.133 [2024-10-07 13:36:28.805977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.133 [2024-10-07 13:36:28.806011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.133 [2024-10-07 13:36:28.806030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.133 [2024-10-07 13:36:28.806118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.133 [2024-10-07 13:36:28.806144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.133 [2024-10-07 13:36:28.806161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.133 [2024-10-07 13:36:28.806332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.133 [2024-10-07 13:36:28.806360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.133 [2024-10-07 13:36:28.806424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.133 [2024-10-07 13:36:28.806444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.133 [2024-10-07 13:36:28.806457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.133 [2024-10-07 13:36:28.806490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.133 [2024-10-07 13:36:28.806505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.133 [2024-10-07 13:36:28.806519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.133 [2024-10-07 13:36:28.806543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.133 [2024-10-07 13:36:28.806560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.133 [2024-10-07 13:36:28.821999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.133 [2024-10-07 13:36:28.822032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.133 [2024-10-07 13:36:28.822274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.133 [2024-10-07 13:36:28.822303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.133 [2024-10-07 13:36:28.822319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.133 [2024-10-07 13:36:28.822401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.133 [2024-10-07 13:36:28.822427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.133 [2024-10-07 13:36:28.822443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.133 [2024-10-07 13:36:28.822706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.133 [2024-10-07 13:36:28.822735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.133 [2024-10-07 13:36:28.822874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.133 [2024-10-07 13:36:28.822897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.133 [2024-10-07 13:36:28.822911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.133 [2024-10-07 13:36:28.822930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.133 [2024-10-07 13:36:28.822945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.133 [2024-10-07 13:36:28.822964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.133 [2024-10-07 13:36:28.823013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.133 [2024-10-07 13:36:28.823035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.133 [2024-10-07 13:36:28.835106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.134 [2024-10-07 13:36:28.835141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.134 [2024-10-07 13:36:28.835283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.134 [2024-10-07 13:36:28.835311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.134 [2024-10-07 13:36:28.835327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.134 [2024-10-07 13:36:28.835433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.134 [2024-10-07 13:36:28.835458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.134 [2024-10-07 13:36:28.835473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.134 [2024-10-07 13:36:28.836747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.134 [2024-10-07 13:36:28.836779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.134 [2024-10-07 13:36:28.836817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.134 [2024-10-07 13:36:28.836836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.134 [2024-10-07 13:36:28.836850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.134 [2024-10-07 13:36:28.836868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.134 [2024-10-07 13:36:28.836883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.134 [2024-10-07 13:36:28.836895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.134 [2024-10-07 13:36:28.836919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.134 [2024-10-07 13:36:28.836935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.134 [2024-10-07 13:36:28.850743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.134 [2024-10-07 13:36:28.850778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.134 [2024-10-07 13:36:28.850882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.134 [2024-10-07 13:36:28.850910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.134 [2024-10-07 13:36:28.850928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.134 [2024-10-07 13:36:28.851059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.134 [2024-10-07 13:36:28.851084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.134 [2024-10-07 13:36:28.851100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.134 [2024-10-07 13:36:28.851126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.134 [2024-10-07 13:36:28.851153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.134 [2024-10-07 13:36:28.851175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.134 [2024-10-07 13:36:28.851190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.134 [2024-10-07 13:36:28.851203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.134 [2024-10-07 13:36:28.851221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.134 [2024-10-07 13:36:28.851236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.134 [2024-10-07 13:36:28.851248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.134 [2024-10-07 13:36:28.851272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.134 [2024-10-07 13:36:28.851289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.134 [2024-10-07 13:36:28.863478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.134 [2024-10-07 13:36:28.863512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.134 [2024-10-07 13:36:28.865536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.134 [2024-10-07 13:36:28.865569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.134 [2024-10-07 13:36:28.865587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.134 [2024-10-07 13:36:28.865707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.134 [2024-10-07 13:36:28.865733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.134 [2024-10-07 13:36:28.865749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.134 [2024-10-07 13:36:28.866462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.134 [2024-10-07 13:36:28.866492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.134 [2024-10-07 13:36:28.866904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.134 [2024-10-07 13:36:28.866932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.134 [2024-10-07 13:36:28.866947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.134 [2024-10-07 13:36:28.866964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.134 [2024-10-07 13:36:28.866992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.134 [2024-10-07 13:36:28.867004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.134 [2024-10-07 13:36:28.867233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.134 [2024-10-07 13:36:28.867259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.134 [2024-10-07 13:36:28.873592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.134 [2024-10-07 13:36:28.873637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.134 [2024-10-07 13:36:28.873811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.134 [2024-10-07 13:36:28.873840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.134 [2024-10-07 13:36:28.873864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.134 [2024-10-07 13:36:28.876615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.134 [2024-10-07 13:36:28.876647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.134 [2024-10-07 13:36:28.876664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.134 [2024-10-07 13:36:28.876694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.134 [2024-10-07 13:36:28.877494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.134 [2024-10-07 13:36:28.877521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.134 [2024-10-07 13:36:28.877535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.134 [2024-10-07 13:36:28.877547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.134 [2024-10-07 13:36:28.877743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.134 [2024-10-07 13:36:28.877767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.134 [2024-10-07 13:36:28.877781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.134 [2024-10-07 13:36:28.877795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.134 [2024-10-07 13:36:28.877902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.134 [2024-10-07 13:36:28.883798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.134 [2024-10-07 13:36:28.883831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.134 [2024-10-07 13:36:28.883947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.134 [2024-10-07 13:36:28.883975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.134 [2024-10-07 13:36:28.883992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.134 [2024-10-07 13:36:28.884079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.134 [2024-10-07 13:36:28.884105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.134 [2024-10-07 13:36:28.884121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.134 [2024-10-07 13:36:28.884146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.134 [2024-10-07 13:36:28.884167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.134 [2024-10-07 13:36:28.884188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.134 [2024-10-07 13:36:28.884203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.134 [2024-10-07 13:36:28.884216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.134 [2024-10-07 13:36:28.884233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.134 [2024-10-07 13:36:28.884247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.134 [2024-10-07 13:36:28.884269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.134 [2024-10-07 13:36:28.884295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.134 [2024-10-07 13:36:28.884312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.134 [2024-10-07 13:36:28.895796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.134 [2024-10-07 13:36:28.895828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.134 [2024-10-07 13:36:28.895935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.134 [2024-10-07 13:36:28.895962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.134 [2024-10-07 13:36:28.895978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.134 [2024-10-07 13:36:28.896064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.134 [2024-10-07 13:36:28.896091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.134 [2024-10-07 13:36:28.896108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.134 [2024-10-07 13:36:28.896133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.135 [2024-10-07 13:36:28.896155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.135 [2024-10-07 13:36:28.896176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.135 [2024-10-07 13:36:28.896191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.135 [2024-10-07 13:36:28.896204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.135 [2024-10-07 13:36:28.896222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.135 [2024-10-07 13:36:28.896235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.135 [2024-10-07 13:36:28.896248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.135 [2024-10-07 13:36:28.896272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.135 [2024-10-07 13:36:28.896288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.135 [2024-10-07 13:36:28.907842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.135 [2024-10-07 13:36:28.907876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.135 [2024-10-07 13:36:28.908111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.135 [2024-10-07 13:36:28.908141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.135 [2024-10-07 13:36:28.908158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.135 [2024-10-07 13:36:28.908364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.135 [2024-10-07 13:36:28.908392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.135 [2024-10-07 13:36:28.908408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.135 [2024-10-07 13:36:28.908532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.135 [2024-10-07 13:36:28.908559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.135 [2024-10-07 13:36:28.908706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.135 [2024-10-07 13:36:28.908730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.135 [2024-10-07 13:36:28.908744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.135 [2024-10-07 13:36:28.908761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.135 [2024-10-07 13:36:28.908776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.135 [2024-10-07 13:36:28.908789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.135 [2024-10-07 13:36:28.908896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.135 [2024-10-07 13:36:28.908917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.135 [2024-10-07 13:36:28.917957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.135 [2024-10-07 13:36:28.918007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.135 [2024-10-07 13:36:28.918165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.135 [2024-10-07 13:36:28.918195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.135 [2024-10-07 13:36:28.918212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.135 [2024-10-07 13:36:28.919305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.135 [2024-10-07 13:36:28.919335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.135 [2024-10-07 13:36:28.919352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.135 [2024-10-07 13:36:28.919371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.135 [2024-10-07 13:36:28.919577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.135 [2024-10-07 13:36:28.919605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.135 [2024-10-07 13:36:28.919620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.135 [2024-10-07 13:36:28.919633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.135 [2024-10-07 13:36:28.919765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.135 [2024-10-07 13:36:28.919791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.135 [2024-10-07 13:36:28.919806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.135 [2024-10-07 13:36:28.919820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.135 [2024-10-07 13:36:28.919931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.135 [2024-10-07 13:36:28.928225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.135 [2024-10-07 13:36:28.928395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.135 [2024-10-07 13:36:28.928426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.135 [2024-10-07 13:36:28.928443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.135 [2024-10-07 13:36:28.928475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.135 [2024-10-07 13:36:28.928507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.135 [2024-10-07 13:36:28.928628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.135 [2024-10-07 13:36:28.928655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.135 [2024-10-07 13:36:28.928682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.135 [2024-10-07 13:36:28.928699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.135 [2024-10-07 13:36:28.928713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.135 [2024-10-07 13:36:28.928726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.135 [2024-10-07 13:36:28.928911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.135 [2024-10-07 13:36:28.928940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.135 [2024-10-07 13:36:28.929005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.135 [2024-10-07 13:36:28.929041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.135 [2024-10-07 13:36:28.929055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.135 [2024-10-07 13:36:28.929096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.135 [2024-10-07 13:36:28.942172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.135 [2024-10-07 13:36:28.942205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.135 [2024-10-07 13:36:28.942598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.135 [2024-10-07 13:36:28.942630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.135 [2024-10-07 13:36:28.942648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.135 [2024-10-07 13:36:28.942750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.135 [2024-10-07 13:36:28.942776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.135 [2024-10-07 13:36:28.942792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.135 [2024-10-07 13:36:28.943264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.135 [2024-10-07 13:36:28.943293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.135 [2024-10-07 13:36:28.943537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.135 [2024-10-07 13:36:28.943562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.135 [2024-10-07 13:36:28.943577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.135 [2024-10-07 13:36:28.943594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.135 [2024-10-07 13:36:28.943609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.135 [2024-10-07 13:36:28.943621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.135 [2024-10-07 13:36:28.943839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.135 [2024-10-07 13:36:28.943865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.135 [2024-10-07 13:36:28.953058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.135 [2024-10-07 13:36:28.953091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.135 [2024-10-07 13:36:28.953245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.135 [2024-10-07 13:36:28.953276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.135 [2024-10-07 13:36:28.953294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.135 [2024-10-07 13:36:28.953414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.135 [2024-10-07 13:36:28.953439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.135 [2024-10-07 13:36:28.953455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.135 [2024-10-07 13:36:28.954296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.135 [2024-10-07 13:36:28.954327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.135 [2024-10-07 13:36:28.956909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.135 [2024-10-07 13:36:28.956934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.135 [2024-10-07 13:36:28.956948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.136 [2024-10-07 13:36:28.956979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.136 [2024-10-07 13:36:28.956994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.136 [2024-10-07 13:36:28.957006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.136 [2024-10-07 13:36:28.957302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.136 [2024-10-07 13:36:28.957328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.136 [2024-10-07 13:36:28.963476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.136 [2024-10-07 13:36:28.963508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.136 [2024-10-07 13:36:28.963755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.136 [2024-10-07 13:36:28.963786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.136 [2024-10-07 13:36:28.963804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.136 [2024-10-07 13:36:28.963882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.136 [2024-10-07 13:36:28.963907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.136 [2024-10-07 13:36:28.963923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.136 [2024-10-07 13:36:28.964041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.136 [2024-10-07 13:36:28.964070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.136 [2024-10-07 13:36:28.964172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.136 [2024-10-07 13:36:28.964199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.136 [2024-10-07 13:36:28.964214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.136 [2024-10-07 13:36:28.964232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.136 [2024-10-07 13:36:28.964247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.136 [2024-10-07 13:36:28.964259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.136 [2024-10-07 13:36:28.966576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.136 [2024-10-07 13:36:28.966603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.136 [2024-10-07 13:36:28.973587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.136 [2024-10-07 13:36:28.973633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.136 [2024-10-07 13:36:28.973820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.136 [2024-10-07 13:36:28.973850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.136 [2024-10-07 13:36:28.973867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.136 [2024-10-07 13:36:28.973985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.136 [2024-10-07 13:36:28.974013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.136 [2024-10-07 13:36:28.974029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.136 [2024-10-07 13:36:28.974048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.136 [2024-10-07 13:36:28.974074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.136 [2024-10-07 13:36:28.974092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.136 [2024-10-07 13:36:28.974105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.136 [2024-10-07 13:36:28.974118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.136 [2024-10-07 13:36:28.974143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.136 [2024-10-07 13:36:28.974160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.136 [2024-10-07 13:36:28.974173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.136 [2024-10-07 13:36:28.974186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.136 [2024-10-07 13:36:28.974208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.136 [2024-10-07 13:36:28.985583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.136 [2024-10-07 13:36:28.985617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.136 [2024-10-07 13:36:28.985998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.136 [2024-10-07 13:36:28.986030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.136 [2024-10-07 13:36:28.986047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.136 [2024-10-07 13:36:28.986162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.136 [2024-10-07 13:36:28.986187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.136 [2024-10-07 13:36:28.986203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.136 [2024-10-07 13:36:28.986254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.136 [2024-10-07 13:36:28.986279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.136 [2024-10-07 13:36:28.986300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.136 [2024-10-07 13:36:28.986316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.136 [2024-10-07 13:36:28.986329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.136 [2024-10-07 13:36:28.986346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.136 [2024-10-07 13:36:28.986360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.136 [2024-10-07 13:36:28.986373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.136 [2024-10-07 13:36:28.986397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.136 [2024-10-07 13:36:28.986413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.136 [2024-10-07 13:36:28.995707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.136 [2024-10-07 13:36:28.995755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.136 [2024-10-07 13:36:28.995933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.136 [2024-10-07 13:36:28.995962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.136 [2024-10-07 13:36:28.995980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.136 [2024-10-07 13:36:28.996072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.136 [2024-10-07 13:36:28.996099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.136 [2024-10-07 13:36:28.996116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.136 [2024-10-07 13:36:28.996134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.136 [2024-10-07 13:36:28.996161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.136 [2024-10-07 13:36:28.996179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.136 [2024-10-07 13:36:28.996192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.136 [2024-10-07 13:36:28.996205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.136 [2024-10-07 13:36:28.996230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.136 [2024-10-07 13:36:28.996247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.136 [2024-10-07 13:36:28.996260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.136 [2024-10-07 13:36:28.996273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.136 [2024-10-07 13:36:28.996301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.136 [2024-10-07 13:36:29.008282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.136 [2024-10-07 13:36:29.008317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.136 [2024-10-07 13:36:29.008593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.136 [2024-10-07 13:36:29.008624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.136 [2024-10-07 13:36:29.008641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.136 [2024-10-07 13:36:29.008759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.136 [2024-10-07 13:36:29.008786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.136 [2024-10-07 13:36:29.008803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.136 [2024-10-07 13:36:29.008976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.136 [2024-10-07 13:36:29.009006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.137 [2024-10-07 13:36:29.009067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.137 [2024-10-07 13:36:29.009104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.137 [2024-10-07 13:36:29.009118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.137 [2024-10-07 13:36:29.009135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.137 [2024-10-07 13:36:29.009150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.137 [2024-10-07 13:36:29.009163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.137 [2024-10-07 13:36:29.009188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.137 [2024-10-07 13:36:29.009204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.137 [2024-10-07 13:36:29.019111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.137 [2024-10-07 13:36:29.019144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.137 [2024-10-07 13:36:29.019332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.137 [2024-10-07 13:36:29.019361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.137 [2024-10-07 13:36:29.019378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.137 [2024-10-07 13:36:29.019523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.137 [2024-10-07 13:36:29.019550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.137 [2024-10-07 13:36:29.019566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.137 [2024-10-07 13:36:29.019592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.137 [2024-10-07 13:36:29.019614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.137 [2024-10-07 13:36:29.019635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.137 [2024-10-07 13:36:29.019650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.137 [2024-10-07 13:36:29.019679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.137 [2024-10-07 13:36:29.019699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.137 [2024-10-07 13:36:29.019715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.137 [2024-10-07 13:36:29.019728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.137 [2024-10-07 13:36:29.019753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.137 [2024-10-07 13:36:29.019770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.137 [2024-10-07 13:36:29.031029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.137 [2024-10-07 13:36:29.031063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.137 [2024-10-07 13:36:29.031359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.137 [2024-10-07 13:36:29.031391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.137 [2024-10-07 13:36:29.031409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.137 [2024-10-07 13:36:29.031515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.137 [2024-10-07 13:36:29.031542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.137 [2024-10-07 13:36:29.031559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.137 [2024-10-07 13:36:29.033089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.137 [2024-10-07 13:36:29.033120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.137 [2024-10-07 13:36:29.033792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.137 [2024-10-07 13:36:29.033817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.137 [2024-10-07 13:36:29.033831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.137 [2024-10-07 13:36:29.033847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.137 [2024-10-07 13:36:29.033862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.137 [2024-10-07 13:36:29.033875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.137 [2024-10-07 13:36:29.034128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.137 [2024-10-07 13:36:29.034153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.137 [2024-10-07 13:36:29.041375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.137 [2024-10-07 13:36:29.041406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.137 [2024-10-07 13:36:29.041612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.137 [2024-10-07 13:36:29.041642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.137 [2024-10-07 13:36:29.041659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.137 [2024-10-07 13:36:29.041774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.137 [2024-10-07 13:36:29.041807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.137 [2024-10-07 13:36:29.041824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.137 [2024-10-07 13:36:29.041849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.137 [2024-10-07 13:36:29.041871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.137 [2024-10-07 13:36:29.041892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.137 [2024-10-07 13:36:29.041907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.137 [2024-10-07 13:36:29.041921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.137 [2024-10-07 13:36:29.041938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.137 [2024-10-07 13:36:29.041952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.137 [2024-10-07 13:36:29.041965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.137 [2024-10-07 13:36:29.041990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.137 [2024-10-07 13:36:29.042021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.137 [2024-10-07 13:36:29.051485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.137 [2024-10-07 13:36:29.051531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.137 [2024-10-07 13:36:29.051646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.137 [2024-10-07 13:36:29.051702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.137 [2024-10-07 13:36:29.051722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.137 [2024-10-07 13:36:29.051867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.137 [2024-10-07 13:36:29.051895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.137 [2024-10-07 13:36:29.051912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.137 [2024-10-07 13:36:29.051931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.137 [2024-10-07 13:36:29.052203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.137 [2024-10-07 13:36:29.052232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.137 [2024-10-07 13:36:29.052246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.137 [2024-10-07 13:36:29.052259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.137 [2024-10-07 13:36:29.052389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.137 [2024-10-07 13:36:29.052414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.137 [2024-10-07 13:36:29.052428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.137 [2024-10-07 13:36:29.052441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.137 [2024-10-07 13:36:29.052546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.137 [2024-10-07 13:36:29.063991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.137 [2024-10-07 13:36:29.064026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.137 [2024-10-07 13:36:29.064380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.137 [2024-10-07 13:36:29.064412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.137 [2024-10-07 13:36:29.064429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.137 [2024-10-07 13:36:29.064519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.137 [2024-10-07 13:36:29.064544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.137 [2024-10-07 13:36:29.064560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.137 [2024-10-07 13:36:29.064776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.137 [2024-10-07 13:36:29.064806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.137 [2024-10-07 13:36:29.064895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.137 [2024-10-07 13:36:29.064919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.137 [2024-10-07 13:36:29.064934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.137 [2024-10-07 13:36:29.064952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.137 [2024-10-07 13:36:29.064966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.137 [2024-10-07 13:36:29.064979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.137 [2024-10-07 13:36:29.065153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.137 [2024-10-07 13:36:29.065177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.137 [2024-10-07 13:36:29.078313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.138 [2024-10-07 13:36:29.078347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.138 [2024-10-07 13:36:29.079406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.138 [2024-10-07 13:36:29.079438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.138 [2024-10-07 13:36:29.079455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.138 [2024-10-07 13:36:29.079544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.138 [2024-10-07 13:36:29.079570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.138 [2024-10-07 13:36:29.079586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.138 [2024-10-07 13:36:29.079708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.138 [2024-10-07 13:36:29.079737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.138 [2024-10-07 13:36:29.079760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.138 [2024-10-07 13:36:29.079775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.138 [2024-10-07 13:36:29.079794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.138 [2024-10-07 13:36:29.079813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.138 [2024-10-07 13:36:29.079827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.138 [2024-10-07 13:36:29.079840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.138 [2024-10-07 13:36:29.080050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.138 [2024-10-07 13:36:29.080075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.138 [2024-10-07 13:36:29.092248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.138 [2024-10-07 13:36:29.092282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.138 [2024-10-07 13:36:29.092419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.138 [2024-10-07 13:36:29.092448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.138 [2024-10-07 13:36:29.092465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.138 [2024-10-07 13:36:29.092575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.138 [2024-10-07 13:36:29.092601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.138 [2024-10-07 13:36:29.092617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.138 [2024-10-07 13:36:29.092643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.138 [2024-10-07 13:36:29.092664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.138 [2024-10-07 13:36:29.093017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.138 [2024-10-07 13:36:29.093041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.138 [2024-10-07 13:36:29.093054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.138 [2024-10-07 13:36:29.093072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.138 [2024-10-07 13:36:29.093101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.138 [2024-10-07 13:36:29.093113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.138 [2024-10-07 13:36:29.094058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.138 [2024-10-07 13:36:29.094082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.138 [2024-10-07 13:36:29.105765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.138 [2024-10-07 13:36:29.105799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.138 [2024-10-07 13:36:29.106045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.138 [2024-10-07 13:36:29.106075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.138 [2024-10-07 13:36:29.106093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.138 [2024-10-07 13:36:29.106198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.138 [2024-10-07 13:36:29.106224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.138 [2024-10-07 13:36:29.106246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.138 [2024-10-07 13:36:29.107876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.138 [2024-10-07 13:36:29.107907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.138 [2024-10-07 13:36:29.108419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.138 [2024-10-07 13:36:29.108442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.138 [2024-10-07 13:36:29.108456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.138 [2024-10-07 13:36:29.108472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.138 [2024-10-07 13:36:29.108486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.138 [2024-10-07 13:36:29.108498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.138 [2024-10-07 13:36:29.108747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.138 [2024-10-07 13:36:29.108773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.138 [2024-10-07 13:36:29.120154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.138 [2024-10-07 13:36:29.120186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.138 [2024-10-07 13:36:29.120480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.138 [2024-10-07 13:36:29.120510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.138 [2024-10-07 13:36:29.120527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.138 [2024-10-07 13:36:29.120606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.138 [2024-10-07 13:36:29.120633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.138 [2024-10-07 13:36:29.120649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.138 [2024-10-07 13:36:29.121641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.138 [2024-10-07 13:36:29.121696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.138 [2024-10-07 13:36:29.121813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.138 [2024-10-07 13:36:29.121835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.138 [2024-10-07 13:36:29.121849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.138 [2024-10-07 13:36:29.121866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.138 [2024-10-07 13:36:29.121881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.138 [2024-10-07 13:36:29.121893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.138 [2024-10-07 13:36:29.121918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.138 [2024-10-07 13:36:29.121935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.138 [2024-10-07 13:36:29.130279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.138 [2024-10-07 13:36:29.130333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.138 [2024-10-07 13:36:29.130549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.138 [2024-10-07 13:36:29.130578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.138 [2024-10-07 13:36:29.130595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.138 [2024-10-07 13:36:29.130738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.138 [2024-10-07 13:36:29.130767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.138 [2024-10-07 13:36:29.130783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.138 [2024-10-07 13:36:29.130802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.138 [2024-10-07 13:36:29.133434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.138 [2024-10-07 13:36:29.133462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.138 [2024-10-07 13:36:29.133477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.138 [2024-10-07 13:36:29.133490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.138 [2024-10-07 13:36:29.135211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.138 [2024-10-07 13:36:29.135240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.138 [2024-10-07 13:36:29.135255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.138 [2024-10-07 13:36:29.135268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.138 [2024-10-07 13:36:29.136264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.138 [2024-10-07 13:36:29.140714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.138 [2024-10-07 13:36:29.140745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.138 [2024-10-07 13:36:29.140854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.138 [2024-10-07 13:36:29.140880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.138 [2024-10-07 13:36:29.140897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.138 [2024-10-07 13:36:29.140983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.138 [2024-10-07 13:36:29.141010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.138 [2024-10-07 13:36:29.141026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.139 [2024-10-07 13:36:29.141051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.139 [2024-10-07 13:36:29.141072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.139 [2024-10-07 13:36:29.141093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.139 [2024-10-07 13:36:29.141107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.139 [2024-10-07 13:36:29.141121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.139 [2024-10-07 13:36:29.141143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.139 [2024-10-07 13:36:29.141159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.139 [2024-10-07 13:36:29.141172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.139 [2024-10-07 13:36:29.141196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.139 [2024-10-07 13:36:29.141212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.139 [2024-10-07 13:36:29.152812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.139 [2024-10-07 13:36:29.152846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.139 [2024-10-07 13:36:29.153176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.139 [2024-10-07 13:36:29.153208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.139 [2024-10-07 13:36:29.153225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.139 [2024-10-07 13:36:29.153334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.139 [2024-10-07 13:36:29.153361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.139 [2024-10-07 13:36:29.153377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.139 [2024-10-07 13:36:29.153427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.139 [2024-10-07 13:36:29.153453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.139 [2024-10-07 13:36:29.153475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.139 [2024-10-07 13:36:29.153490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.139 [2024-10-07 13:36:29.153503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.139 [2024-10-07 13:36:29.153520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.139 [2024-10-07 13:36:29.153535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.139 [2024-10-07 13:36:29.153548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.139 [2024-10-07 13:36:29.153706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.139 [2024-10-07 13:36:29.153731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.139 [2024-10-07 13:36:29.163950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.139 [2024-10-07 13:36:29.163985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.139 [2024-10-07 13:36:29.164405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.139 [2024-10-07 13:36:29.164436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.139 [2024-10-07 13:36:29.164454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.139 [2024-10-07 13:36:29.164569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.139 [2024-10-07 13:36:29.164595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.139 [2024-10-07 13:36:29.164611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.139 [2024-10-07 13:36:29.164674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.139 [2024-10-07 13:36:29.164701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.139 [2024-10-07 13:36:29.164746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.139 [2024-10-07 13:36:29.164766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.139 [2024-10-07 13:36:29.164780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.139 [2024-10-07 13:36:29.164798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.139 [2024-10-07 13:36:29.164813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.139 [2024-10-07 13:36:29.164826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.139 [2024-10-07 13:36:29.164850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.139 [2024-10-07 13:36:29.164867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.139 [2024-10-07 13:36:29.175741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.139 [2024-10-07 13:36:29.175774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.139 [2024-10-07 13:36:29.176204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.139 [2024-10-07 13:36:29.176235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.139 [2024-10-07 13:36:29.176252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.139 [2024-10-07 13:36:29.176340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.139 [2024-10-07 13:36:29.176366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.139 [2024-10-07 13:36:29.176382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.139 [2024-10-07 13:36:29.176511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.139 [2024-10-07 13:36:29.176540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.139 [2024-10-07 13:36:29.178441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.139 [2024-10-07 13:36:29.178467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.139 [2024-10-07 13:36:29.178481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.139 [2024-10-07 13:36:29.178499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.139 [2024-10-07 13:36:29.178514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.139 [2024-10-07 13:36:29.178527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.139 [2024-10-07 13:36:29.179357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.139 [2024-10-07 13:36:29.179381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.139 [2024-10-07 13:36:29.186043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.139 [2024-10-07 13:36:29.186094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.139 [2024-10-07 13:36:29.186427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.139 [2024-10-07 13:36:29.186457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.139 [2024-10-07 13:36:29.186475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.139 [2024-10-07 13:36:29.186590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.139 [2024-10-07 13:36:29.186617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.139 [2024-10-07 13:36:29.186633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.139 [2024-10-07 13:36:29.186741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.139 [2024-10-07 13:36:29.186770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.139 [2024-10-07 13:36:29.186793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.139 [2024-10-07 13:36:29.186808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.139 [2024-10-07 13:36:29.186822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.139 [2024-10-07 13:36:29.186838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.139 [2024-10-07 13:36:29.186867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.139 [2024-10-07 13:36:29.186880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.139 [2024-10-07 13:36:29.186905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.139 [2024-10-07 13:36:29.186920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.139 [2024-10-07 13:36:29.196183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.139 [2024-10-07 13:36:29.196228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.139 [2024-10-07 13:36:29.196392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.139 [2024-10-07 13:36:29.196421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.139 [2024-10-07 13:36:29.196438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.139 [2024-10-07 13:36:29.196523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.139 [2024-10-07 13:36:29.196551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.139 [2024-10-07 13:36:29.196568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.139 [2024-10-07 13:36:29.196586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.139 [2024-10-07 13:36:29.196612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.139 [2024-10-07 13:36:29.196631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.139 [2024-10-07 13:36:29.196643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.139 [2024-10-07 13:36:29.196656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.139 [2024-10-07 13:36:29.196687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.139 [2024-10-07 13:36:29.196712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.140 [2024-10-07 13:36:29.196726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.140 [2024-10-07 13:36:29.196739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.140 [2024-10-07 13:36:29.196762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.140 [2024-10-07 13:36:29.209408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.140 [2024-10-07 13:36:29.209442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.140 [2024-10-07 13:36:29.209643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.140 [2024-10-07 13:36:29.209682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.140 [2024-10-07 13:36:29.209701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.140 [2024-10-07 13:36:29.209810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.140 [2024-10-07 13:36:29.209838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.140 [2024-10-07 13:36:29.209854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.140 [2024-10-07 13:36:29.210112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.140 [2024-10-07 13:36:29.210157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.140 [2024-10-07 13:36:29.210675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.140 [2024-10-07 13:36:29.210700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.140 [2024-10-07 13:36:29.210714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.140 [2024-10-07 13:36:29.210730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.140 [2024-10-07 13:36:29.210745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.140 [2024-10-07 13:36:29.210758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.140 [2024-10-07 13:36:29.210989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.140 [2024-10-07 13:36:29.211014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.140 [2024-10-07 13:36:29.225312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.140 [2024-10-07 13:36:29.225347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.140 [2024-10-07 13:36:29.226128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.140 [2024-10-07 13:36:29.226159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.140 [2024-10-07 13:36:29.226177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.140 [2024-10-07 13:36:29.226321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.140 [2024-10-07 13:36:29.226349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.140 [2024-10-07 13:36:29.226365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.140 [2024-10-07 13:36:29.226821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.140 [2024-10-07 13:36:29.226858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.140 [2024-10-07 13:36:29.226920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.140 [2024-10-07 13:36:29.226940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.140 [2024-10-07 13:36:29.226953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.140 [2024-10-07 13:36:29.226971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.140 [2024-10-07 13:36:29.226985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.140 [2024-10-07 13:36:29.226998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.140 [2024-10-07 13:36:29.227455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.140 [2024-10-07 13:36:29.227478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.140 [2024-10-07 13:36:29.236481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.140 [2024-10-07 13:36:29.236513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.140 [2024-10-07 13:36:29.239074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.140 [2024-10-07 13:36:29.239106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.140 [2024-10-07 13:36:29.239124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.140 [2024-10-07 13:36:29.239232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.140 [2024-10-07 13:36:29.239257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.140 [2024-10-07 13:36:29.239272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.140 [2024-10-07 13:36:29.240622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.140 [2024-10-07 13:36:29.240675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.140 [2024-10-07 13:36:29.241253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.140 [2024-10-07 13:36:29.241277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.140 [2024-10-07 13:36:29.241290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.140 [2024-10-07 13:36:29.241307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.140 [2024-10-07 13:36:29.241320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.140 [2024-10-07 13:36:29.241333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.140 [2024-10-07 13:36:29.241421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.140 [2024-10-07 13:36:29.241442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.140 [2024-10-07 13:36:29.246594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.140 [2024-10-07 13:36:29.246639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.140 [2024-10-07 13:36:29.246803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.140 [2024-10-07 13:36:29.246837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.140 [2024-10-07 13:36:29.246855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.140 [2024-10-07 13:36:29.246977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.140 [2024-10-07 13:36:29.247004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.140 [2024-10-07 13:36:29.247020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.140 [2024-10-07 13:36:29.247039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.140 [2024-10-07 13:36:29.247065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.140 [2024-10-07 13:36:29.247084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.140 [2024-10-07 13:36:29.247097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.140 [2024-10-07 13:36:29.247110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.140 [2024-10-07 13:36:29.249119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.140 [2024-10-07 13:36:29.249146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.140 [2024-10-07 13:36:29.249161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.140 [2024-10-07 13:36:29.249174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.140 [2024-10-07 13:36:29.249509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.140 [2024-10-07 13:36:29.256719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.140 [2024-10-07 13:36:29.256870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.140 [2024-10-07 13:36:29.256900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.140 [2024-10-07 13:36:29.256917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.140 [2024-10-07 13:36:29.256944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.140 [2024-10-07 13:36:29.256976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.140 [2024-10-07 13:36:29.257197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.140 [2024-10-07 13:36:29.257225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.140 [2024-10-07 13:36:29.257241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.140 [2024-10-07 13:36:29.257256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.140 [2024-10-07 13:36:29.257269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.140 [2024-10-07 13:36:29.257281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.141 [2024-10-07 13:36:29.257306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.141 [2024-10-07 13:36:29.257327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.141 [2024-10-07 13:36:29.257350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.141 [2024-10-07 13:36:29.257370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.141 [2024-10-07 13:36:29.257384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.141 [2024-10-07 13:36:29.257407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.141 [2024-10-07 13:36:29.269915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.141 [2024-10-07 13:36:29.269967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.141 [2024-10-07 13:36:29.270320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.141 [2024-10-07 13:36:29.270351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.141 [2024-10-07 13:36:29.270369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.141 [2024-10-07 13:36:29.270505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.141 [2024-10-07 13:36:29.270532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.141 [2024-10-07 13:36:29.270548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.141 [2024-10-07 13:36:29.270599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.141 [2024-10-07 13:36:29.270624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.141 [2024-10-07 13:36:29.270646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.141 [2024-10-07 13:36:29.270661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.141 [2024-10-07 13:36:29.270685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.141 [2024-10-07 13:36:29.270703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.141 [2024-10-07 13:36:29.270718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.141 [2024-10-07 13:36:29.270731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.141 [2024-10-07 13:36:29.270755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.141 [2024-10-07 13:36:29.270772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.141 [2024-10-07 13:36:29.280641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.141 [2024-10-07 13:36:29.280682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.141 [2024-10-07 13:36:29.280976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.141 [2024-10-07 13:36:29.281007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.141 [2024-10-07 13:36:29.281024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.141 [2024-10-07 13:36:29.281110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.141 [2024-10-07 13:36:29.281138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.141 [2024-10-07 13:36:29.281155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.141 [2024-10-07 13:36:29.281263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.141 [2024-10-07 13:36:29.281299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.141 [2024-10-07 13:36:29.283650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.141 [2024-10-07 13:36:29.283685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.141 [2024-10-07 13:36:29.283701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.141 [2024-10-07 13:36:29.283719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.141 [2024-10-07 13:36:29.283734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.141 [2024-10-07 13:36:29.283747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.141 [2024-10-07 13:36:29.284754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.141 [2024-10-07 13:36:29.284779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.141 [2024-10-07 13:36:29.291040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.141 [2024-10-07 13:36:29.291086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.141 [2024-10-07 13:36:29.291255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.141 [2024-10-07 13:36:29.291284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.141 [2024-10-07 13:36:29.291301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.141 [2024-10-07 13:36:29.291391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.141 [2024-10-07 13:36:29.291418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.141 [2024-10-07 13:36:29.291434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.141 [2024-10-07 13:36:29.291732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.141 [2024-10-07 13:36:29.291759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.141 [2024-10-07 13:36:29.291781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.141 [2024-10-07 13:36:29.291796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.141 [2024-10-07 13:36:29.291809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.141 [2024-10-07 13:36:29.291825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.141 [2024-10-07 13:36:29.291839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.141 [2024-10-07 13:36:29.291850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.141 [2024-10-07 13:36:29.291874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.141 [2024-10-07 13:36:29.291890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.141 [2024-10-07 13:36:29.301169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.141 [2024-10-07 13:36:29.301219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.141 [2024-10-07 13:36:29.301458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.141 [2024-10-07 13:36:29.301488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.141 [2024-10-07 13:36:29.301511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.141 [2024-10-07 13:36:29.301839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.141 [2024-10-07 13:36:29.301868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.141 [2024-10-07 13:36:29.301885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.142 [2024-10-07 13:36:29.301904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.142 [2024-10-07 13:36:29.302110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.142 [2024-10-07 13:36:29.302137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.142 [2024-10-07 13:36:29.302152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.142 [2024-10-07 13:36:29.302165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.142 [2024-10-07 13:36:29.302369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.142 [2024-10-07 13:36:29.302394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.142 [2024-10-07 13:36:29.302408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.142 [2024-10-07 13:36:29.302422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.142 [2024-10-07 13:36:29.302471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.142 [2024-10-07 13:36:29.314964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.142 [2024-10-07 13:36:29.315011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.142 [2024-10-07 13:36:29.315579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.142 [2024-10-07 13:36:29.315610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.142 [2024-10-07 13:36:29.315627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.142 [2024-10-07 13:36:29.315751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.142 [2024-10-07 13:36:29.315777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.142 [2024-10-07 13:36:29.315793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.142 [2024-10-07 13:36:29.316199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.142 [2024-10-07 13:36:29.316229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.142 [2024-10-07 13:36:29.316458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.142 [2024-10-07 13:36:29.316482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.142 [2024-10-07 13:36:29.316496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.142 [2024-10-07 13:36:29.316513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.142 [2024-10-07 13:36:29.316528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.142 [2024-10-07 13:36:29.316541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.142 [2024-10-07 13:36:29.316613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.142 [2024-10-07 13:36:29.316649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.142 [2024-10-07 13:36:29.325312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.142 [2024-10-07 13:36:29.325360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.142 [2024-10-07 13:36:29.327241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.142 [2024-10-07 13:36:29.327275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.142 [2024-10-07 13:36:29.327292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.142 [2024-10-07 13:36:29.327379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.142 [2024-10-07 13:36:29.327405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.142 [2024-10-07 13:36:29.327421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.142 [2024-10-07 13:36:29.329677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.142 [2024-10-07 13:36:29.329710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.142 [2024-10-07 13:36:29.330420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.142 [2024-10-07 13:36:29.330445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.142 [2024-10-07 13:36:29.330459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.142 [2024-10-07 13:36:29.330477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.142 [2024-10-07 13:36:29.330493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.142 [2024-10-07 13:36:29.330505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.142 [2024-10-07 13:36:29.330862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.142 [2024-10-07 13:36:29.330888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.142 [2024-10-07 13:36:29.335840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.142 [2024-10-07 13:36:29.335871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.142 [2024-10-07 13:36:29.336014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.142 [2024-10-07 13:36:29.336043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.142 [2024-10-07 13:36:29.336060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.142 [2024-10-07 13:36:29.336136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.142 [2024-10-07 13:36:29.336162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.142 [2024-10-07 13:36:29.336178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.142 [2024-10-07 13:36:29.336712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.142 [2024-10-07 13:36:29.336741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.142 [2024-10-07 13:36:29.336880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.142 [2024-10-07 13:36:29.336904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.142 [2024-10-07 13:36:29.336919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.142 [2024-10-07 13:36:29.336936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.142 [2024-10-07 13:36:29.336951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.142 [2024-10-07 13:36:29.336963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.142 [2024-10-07 13:36:29.337004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.142 [2024-10-07 13:36:29.337024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.142 [2024-10-07 13:36:29.346025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.142 [2024-10-07 13:36:29.346058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.142 [2024-10-07 13:36:29.346194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.142 [2024-10-07 13:36:29.346224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.142 [2024-10-07 13:36:29.346241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.142 [2024-10-07 13:36:29.346374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.142 [2024-10-07 13:36:29.346402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.142 [2024-10-07 13:36:29.346418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.142 [2024-10-07 13:36:29.346655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.142 [2024-10-07 13:36:29.346694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.142 [2024-10-07 13:36:29.346762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.142 [2024-10-07 13:36:29.346784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.142 [2024-10-07 13:36:29.346798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.142 [2024-10-07 13:36:29.346815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.142 [2024-10-07 13:36:29.346829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.142 [2024-10-07 13:36:29.346842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.142 [2024-10-07 13:36:29.347025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.142 [2024-10-07 13:36:29.347064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.142 [2024-10-07 13:36:29.358278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.142 [2024-10-07 13:36:29.358326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.142 [2024-10-07 13:36:29.358543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.142 [2024-10-07 13:36:29.358574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.142 [2024-10-07 13:36:29.358591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.142 [2024-10-07 13:36:29.358702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.142 [2024-10-07 13:36:29.358730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.142 [2024-10-07 13:36:29.358747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.142 [2024-10-07 13:36:29.358772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.142 [2024-10-07 13:36:29.358794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.142 [2024-10-07 13:36:29.358814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.142 [2024-10-07 13:36:29.358830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.142 [2024-10-07 13:36:29.358843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.142 [2024-10-07 13:36:29.358860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.142 [2024-10-07 13:36:29.358874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.142 [2024-10-07 13:36:29.358887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.143 [2024-10-07 13:36:29.358911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.143 [2024-10-07 13:36:29.358928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.143 [2024-10-07 13:36:29.374738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.143 [2024-10-07 13:36:29.374772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.143 [2024-10-07 13:36:29.375186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.143 [2024-10-07 13:36:29.375218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.143 [2024-10-07 13:36:29.375235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.143 [2024-10-07 13:36:29.375311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.143 [2024-10-07 13:36:29.375337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.143 [2024-10-07 13:36:29.375354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.143 [2024-10-07 13:36:29.375594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.143 [2024-10-07 13:36:29.375624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.143 [2024-10-07 13:36:29.376220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.143 [2024-10-07 13:36:29.376244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.143 [2024-10-07 13:36:29.376257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.143 [2024-10-07 13:36:29.376273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.143 [2024-10-07 13:36:29.376287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.143 [2024-10-07 13:36:29.376299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.143 [2024-10-07 13:36:29.376552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.143 [2024-10-07 13:36:29.376583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.143 [2024-10-07 13:36:29.386700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.143 [2024-10-07 13:36:29.386734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.143 [2024-10-07 13:36:29.386959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.143 [2024-10-07 13:36:29.386989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.143 [2024-10-07 13:36:29.387006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.143 [2024-10-07 13:36:29.387114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.143 [2024-10-07 13:36:29.387140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.143 [2024-10-07 13:36:29.387156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.143 [2024-10-07 13:36:29.389322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.143 [2024-10-07 13:36:29.389354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.143 [2024-10-07 13:36:29.390296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.143 [2024-10-07 13:36:29.390320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.143 [2024-10-07 13:36:29.390334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.143 [2024-10-07 13:36:29.390352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.143 [2024-10-07 13:36:29.390366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.143 [2024-10-07 13:36:29.390379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.143 [2024-10-07 13:36:29.390590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.143 [2024-10-07 13:36:29.390613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.143 [2024-10-07 13:36:29.403270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.143 [2024-10-07 13:36:29.403302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.143 [2024-10-07 13:36:29.403929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.143 [2024-10-07 13:36:29.403961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.143 [2024-10-07 13:36:29.403978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.143 [2024-10-07 13:36:29.404089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.143 [2024-10-07 13:36:29.404114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.143 [2024-10-07 13:36:29.404130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.143 [2024-10-07 13:36:29.404487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.143 [2024-10-07 13:36:29.404516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.143 [2024-10-07 13:36:29.404761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.143 [2024-10-07 13:36:29.404792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.143 [2024-10-07 13:36:29.404808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.143 [2024-10-07 13:36:29.404826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.143 [2024-10-07 13:36:29.404840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.143 [2024-10-07 13:36:29.404853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.143 [2024-10-07 13:36:29.404905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.143 [2024-10-07 13:36:29.404925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.143 [2024-10-07 13:36:29.418274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.143 [2024-10-07 13:36:29.418306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.143 [2024-10-07 13:36:29.418703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.143 [2024-10-07 13:36:29.418735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.143 [2024-10-07 13:36:29.418752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.143 [2024-10-07 13:36:29.418840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.143 [2024-10-07 13:36:29.418865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.143 [2024-10-07 13:36:29.418881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.143 [2024-10-07 13:36:29.419086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.143 [2024-10-07 13:36:29.419116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.143 [2024-10-07 13:36:29.419316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.143 [2024-10-07 13:36:29.419340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.143 [2024-10-07 13:36:29.419355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.143 [2024-10-07 13:36:29.419373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.143 [2024-10-07 13:36:29.419388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.143 [2024-10-07 13:36:29.419401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.143 [2024-10-07 13:36:29.419465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.143 [2024-10-07 13:36:29.419500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.143 [2024-10-07 13:36:29.432587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.143 [2024-10-07 13:36:29.432620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.143 [2024-10-07 13:36:29.433291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.143 [2024-10-07 13:36:29.433322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.143 [2024-10-07 13:36:29.433339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.143 [2024-10-07 13:36:29.433456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.143 [2024-10-07 13:36:29.433496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.143 [2024-10-07 13:36:29.433514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.143 [2024-10-07 13:36:29.433746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.143 [2024-10-07 13:36:29.433776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.143 [2024-10-07 13:36:29.433977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.143 [2024-10-07 13:36:29.434001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.143 [2024-10-07 13:36:29.434016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.143 [2024-10-07 13:36:29.434033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.143 [2024-10-07 13:36:29.434048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.143 [2024-10-07 13:36:29.434061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.143 [2024-10-07 13:36:29.434292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.143 [2024-10-07 13:36:29.434318] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.143 [2024-10-07 13:36:29.442878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.143 [2024-10-07 13:36:29.445566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.143 [2024-10-07 13:36:29.445682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.143 [2024-10-07 13:36:29.445711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.143 [2024-10-07 13:36:29.445728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.144 [2024-10-07 13:36:29.446390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.144 [2024-10-07 13:36:29.446420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.144 [2024-10-07 13:36:29.446437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.144 [2024-10-07 13:36:29.446457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.144 [2024-10-07 13:36:29.448104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.144 [2024-10-07 13:36:29.448133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.144 [2024-10-07 13:36:29.448148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.144 [2024-10-07 13:36:29.448161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.144 [2024-10-07 13:36:29.448888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.144 [2024-10-07 13:36:29.448914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.144 [2024-10-07 13:36:29.448928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.144 [2024-10-07 13:36:29.448941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.144 [2024-10-07 13:36:29.449207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.144 [2024-10-07 13:36:29.452966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.144 [2024-10-07 13:36:29.453139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.144 [2024-10-07 13:36:29.453168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.144 [2024-10-07 13:36:29.453185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.144 [2024-10-07 13:36:29.453210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.144 [2024-10-07 13:36:29.453234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.144 [2024-10-07 13:36:29.453250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.144 [2024-10-07 13:36:29.453263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.144 [2024-10-07 13:36:29.453288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.144 [2024-10-07 13:36:29.455842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.144 [2024-10-07 13:36:29.455971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.144 [2024-10-07 13:36:29.456001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.144 [2024-10-07 13:36:29.456024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.144 [2024-10-07 13:36:29.456474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.144 [2024-10-07 13:36:29.456505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.144 [2024-10-07 13:36:29.456535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.144 [2024-10-07 13:36:29.456548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.144 [2024-10-07 13:36:29.456572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.144 [2024-10-07 13:36:29.463306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.144 [2024-10-07 13:36:29.463454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.144 [2024-10-07 13:36:29.463483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.144 [2024-10-07 13:36:29.463501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.144 [2024-10-07 13:36:29.463526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.144 [2024-10-07 13:36:29.463551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.144 [2024-10-07 13:36:29.463566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.144 [2024-10-07 13:36:29.463580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.144 [2024-10-07 13:36:29.463605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.144 [2024-10-07 13:36:29.467471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.144 [2024-10-07 13:36:29.467632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.144 [2024-10-07 13:36:29.467675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.144 [2024-10-07 13:36:29.467699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.144 [2024-10-07 13:36:29.467725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.144 [2024-10-07 13:36:29.467750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.144 [2024-10-07 13:36:29.467765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.144 [2024-10-07 13:36:29.467778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.144 [2024-10-07 13:36:29.467803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.144 [2024-10-07 13:36:29.479025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.144 [2024-10-07 13:36:29.479074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.144 [2024-10-07 13:36:29.479234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.144 [2024-10-07 13:36:29.479263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.144 [2024-10-07 13:36:29.479280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.144 [2024-10-07 13:36:29.479394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.144 [2024-10-07 13:36:29.479420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.144 [2024-10-07 13:36:29.479436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.144 [2024-10-07 13:36:29.479455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.144 [2024-10-07 13:36:29.479481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.144 [2024-10-07 13:36:29.479500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.144 [2024-10-07 13:36:29.479513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.144 [2024-10-07 13:36:29.479526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.144 [2024-10-07 13:36:29.479551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.144 [2024-10-07 13:36:29.479568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.144 [2024-10-07 13:36:29.479581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.144 [2024-10-07 13:36:29.479594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.144 [2024-10-07 13:36:29.479616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.144 [2024-10-07 13:36:29.494725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.144 [2024-10-07 13:36:29.494758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.144 [2024-10-07 13:36:29.494921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.144 [2024-10-07 13:36:29.494951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.144 [2024-10-07 13:36:29.494969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.144 [2024-10-07 13:36:29.495081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.144 [2024-10-07 13:36:29.495108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.144 [2024-10-07 13:36:29.495133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.144 [2024-10-07 13:36:29.495160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.144 [2024-10-07 13:36:29.495181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.144 [2024-10-07 13:36:29.495204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.144 [2024-10-07 13:36:29.495219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.144 [2024-10-07 13:36:29.495233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.144 [2024-10-07 13:36:29.495250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.144 [2024-10-07 13:36:29.495264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.144 [2024-10-07 13:36:29.495277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.144 [2024-10-07 13:36:29.495301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.145 [2024-10-07 13:36:29.495317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.145 [2024-10-07 13:36:29.507013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.145 [2024-10-07 13:36:29.507048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.145 [2024-10-07 13:36:29.509588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.145 [2024-10-07 13:36:29.509621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.145 [2024-10-07 13:36:29.509639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.145 [2024-10-07 13:36:29.509755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.145 [2024-10-07 13:36:29.509782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.145 [2024-10-07 13:36:29.509799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.145 [2024-10-07 13:36:29.510834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.145 [2024-10-07 13:36:29.510865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.145 [2024-10-07 13:36:29.511376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.145 [2024-10-07 13:36:29.511400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.145 [2024-10-07 13:36:29.511421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.145 [2024-10-07 13:36:29.511437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.145 [2024-10-07 13:36:29.511451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.145 [2024-10-07 13:36:29.511463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.145 [2024-10-07 13:36:29.511981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.145 [2024-10-07 13:36:29.512005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.145 [2024-10-07 13:36:29.517131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.145 [2024-10-07 13:36:29.517513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.145 [2024-10-07 13:36:29.517690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.145 [2024-10-07 13:36:29.517719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.145 [2024-10-07 13:36:29.517736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.145 [2024-10-07 13:36:29.517887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.145 [2024-10-07 13:36:29.517916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.145 [2024-10-07 13:36:29.517933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.145 [2024-10-07 13:36:29.517952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.145 [2024-10-07 13:36:29.517978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.145 [2024-10-07 13:36:29.517996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.145 [2024-10-07 13:36:29.518010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.145 [2024-10-07 13:36:29.518023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.145 [2024-10-07 13:36:29.518048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.145 [2024-10-07 13:36:29.518066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.145 [2024-10-07 13:36:29.518079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.145 [2024-10-07 13:36:29.518093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.145 [2024-10-07 13:36:29.518115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.145 [2024-10-07 13:36:29.527306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.145 [2024-10-07 13:36:29.527448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.145 [2024-10-07 13:36:29.527479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.145 [2024-10-07 13:36:29.527496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.145 [2024-10-07 13:36:29.527521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.145 [2024-10-07 13:36:29.527546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.145 [2024-10-07 13:36:29.527561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.145 [2024-10-07 13:36:29.527575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.145 [2024-10-07 13:36:29.527894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.145 [2024-10-07 13:36:29.527992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.145 [2024-10-07 13:36:29.528145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.145 [2024-10-07 13:36:29.528173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.145 [2024-10-07 13:36:29.528190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.145 [2024-10-07 13:36:29.528394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.145 [2024-10-07 13:36:29.528465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.145 [2024-10-07 13:36:29.528485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.145 [2024-10-07 13:36:29.528514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.145 [2024-10-07 13:36:29.528539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.145 [2024-10-07 13:36:29.540190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.145 [2024-10-07 13:36:29.540222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.145 [2024-10-07 13:36:29.540362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.145 [2024-10-07 13:36:29.540393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.145 [2024-10-07 13:36:29.540410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.145 [2024-10-07 13:36:29.540494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.145 [2024-10-07 13:36:29.540522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.145 [2024-10-07 13:36:29.540538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.145 [2024-10-07 13:36:29.540563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.145 [2024-10-07 13:36:29.540584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.145 [2024-10-07 13:36:29.540605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.145 [2024-10-07 13:36:29.540620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.145 [2024-10-07 13:36:29.540633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.145 [2024-10-07 13:36:29.540650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.145 [2024-10-07 13:36:29.540664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.145 [2024-10-07 13:36:29.540689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.145 [2024-10-07 13:36:29.540714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.145 [2024-10-07 13:36:29.540731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.145 [2024-10-07 13:36:29.553593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.145 [2024-10-07 13:36:29.553628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.145 [2024-10-07 13:36:29.556172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.145 [2024-10-07 13:36:29.556205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.145 [2024-10-07 13:36:29.556222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.145 [2024-10-07 13:36:29.556329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.145 [2024-10-07 13:36:29.556354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.145 [2024-10-07 13:36:29.556375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.145 [2024-10-07 13:36:29.557436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.145 [2024-10-07 13:36:29.557466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.145 [2024-10-07 13:36:29.558087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.145 [2024-10-07 13:36:29.558111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.145 [2024-10-07 13:36:29.558132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.145 [2024-10-07 13:36:29.558148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.145 [2024-10-07 13:36:29.558162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.145 [2024-10-07 13:36:29.558174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.145 [2024-10-07 13:36:29.558453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.145 [2024-10-07 13:36:29.558479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.145 [2024-10-07 13:36:29.563719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.145 [2024-10-07 13:36:29.563749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.145 [2024-10-07 13:36:29.563909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.145 [2024-10-07 13:36:29.563938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.145 [2024-10-07 13:36:29.563955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.145 [2024-10-07 13:36:29.564033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.146 [2024-10-07 13:36:29.564059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.146 [2024-10-07 13:36:29.564076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.146 [2024-10-07 13:36:29.566379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.146 [2024-10-07 13:36:29.566411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.146 [2024-10-07 13:36:29.566872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.146 [2024-10-07 13:36:29.566897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.146 [2024-10-07 13:36:29.566917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.146 [2024-10-07 13:36:29.566935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.146 [2024-10-07 13:36:29.566966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.146 [2024-10-07 13:36:29.566978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.146 [2024-10-07 13:36:29.567125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.146 [2024-10-07 13:36:29.567148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.146 [2024-10-07 13:36:29.573829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.146 [2024-10-07 13:36:29.573880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.146 [2024-10-07 13:36:29.574015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.146 [2024-10-07 13:36:29.574045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.146 [2024-10-07 13:36:29.574063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.146 [2024-10-07 13:36:29.574374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.146 [2024-10-07 13:36:29.574404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.146 [2024-10-07 13:36:29.574420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.146 [2024-10-07 13:36:29.574439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.146 [2024-10-07 13:36:29.574491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.146 [2024-10-07 13:36:29.574513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.146 [2024-10-07 13:36:29.574526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.146 [2024-10-07 13:36:29.574540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.146 [2024-10-07 13:36:29.574565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.146 [2024-10-07 13:36:29.574582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.146 [2024-10-07 13:36:29.574595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.146 [2024-10-07 13:36:29.574608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.146 [2024-10-07 13:36:29.574632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.146 [2024-10-07 13:36:29.586916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.146 [2024-10-07 13:36:29.586949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.146 [2024-10-07 13:36:29.587313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.146 [2024-10-07 13:36:29.587345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.146 [2024-10-07 13:36:29.587364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.146 [2024-10-07 13:36:29.587469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.146 [2024-10-07 13:36:29.587497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.146 [2024-10-07 13:36:29.587514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.146 [2024-10-07 13:36:29.587768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.146 [2024-10-07 13:36:29.587798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.146 [2024-10-07 13:36:29.587850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.146 [2024-10-07 13:36:29.587872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.146 [2024-10-07 13:36:29.587886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.146 [2024-10-07 13:36:29.587904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.146 [2024-10-07 13:36:29.587924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.146 [2024-10-07 13:36:29.587938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.146 [2024-10-07 13:36:29.588191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.146 [2024-10-07 13:36:29.588216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.146 [2024-10-07 13:36:29.601006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.146 [2024-10-07 13:36:29.601041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.146 [2024-10-07 13:36:29.601376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.146 [2024-10-07 13:36:29.601408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.146 [2024-10-07 13:36:29.601427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.146 [2024-10-07 13:36:29.601534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.146 [2024-10-07 13:36:29.601561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.146 [2024-10-07 13:36:29.601578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.146 [2024-10-07 13:36:29.602099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.146 [2024-10-07 13:36:29.602129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.146 [2024-10-07 13:36:29.602369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.146 [2024-10-07 13:36:29.602394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.146 [2024-10-07 13:36:29.602409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.146 [2024-10-07 13:36:29.602426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.146 [2024-10-07 13:36:29.602442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.146 [2024-10-07 13:36:29.602457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.146 [2024-10-07 13:36:29.602707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.146 [2024-10-07 13:36:29.602732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.146 [2024-10-07 13:36:29.612461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.146 [2024-10-07 13:36:29.612494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.146 [2024-10-07 13:36:29.612798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.146 [2024-10-07 13:36:29.612829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.146 [2024-10-07 13:36:29.612847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.146 [2024-10-07 13:36:29.612933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.146 [2024-10-07 13:36:29.612971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.146 [2024-10-07 13:36:29.612987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.146 [2024-10-07 13:36:29.613115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.146 [2024-10-07 13:36:29.613143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.146 [2024-10-07 13:36:29.617368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.146 [2024-10-07 13:36:29.617397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.146 [2024-10-07 13:36:29.617415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.146 [2024-10-07 13:36:29.617433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.146 [2024-10-07 13:36:29.617448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.146 [2024-10-07 13:36:29.617460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.146 [2024-10-07 13:36:29.617960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.146 [2024-10-07 13:36:29.618000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.146 8442.50 IOPS, 32.98 MiB/s [2024-10-07T11:36:37.858Z] [2024-10-07 13:36:29.622574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.146 [2024-10-07 13:36:29.622619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.146 [2024-10-07 13:36:29.622756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.146 [2024-10-07 13:36:29.622786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.146 [2024-10-07 13:36:29.622804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.146 [2024-10-07 13:36:29.623045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.146 [2024-10-07 13:36:29.623075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.146 [2024-10-07 13:36:29.623092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.146 [2024-10-07 13:36:29.623111] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.146 [2024-10-07 13:36:29.623260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.146 [2024-10-07 13:36:29.623287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.147 [2024-10-07 13:36:29.623302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.147 [2024-10-07 13:36:29.623316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.147 [2024-10-07 13:36:29.623442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.147 [2024-10-07 13:36:29.623466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.147 [2024-10-07 13:36:29.623481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.147 [2024-10-07 13:36:29.623495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.147 [2024-10-07 13:36:29.623605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.147 [2024-10-07 13:36:29.632703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.147 [2024-10-07 13:36:29.632752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.147 [2024-10-07 13:36:29.632858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.147 [2024-10-07 13:36:29.632888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.147 [2024-10-07 13:36:29.632905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.147 [2024-10-07 13:36:29.633214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.147 [2024-10-07 13:36:29.633243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.147 [2024-10-07 13:36:29.633261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.147 [2024-10-07 13:36:29.633280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.147 [2024-10-07 13:36:29.633527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.147 [2024-10-07 13:36:29.633555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.147 [2024-10-07 13:36:29.633569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.147 [2024-10-07 13:36:29.633584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.147 [2024-10-07 13:36:29.633655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.147 [2024-10-07 13:36:29.633686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.147 [2024-10-07 13:36:29.633716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.147 [2024-10-07 13:36:29.633730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.147 [2024-10-07 13:36:29.633755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.147 [2024-10-07 13:36:29.643396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.147 [2024-10-07 13:36:29.643431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.147 [2024-10-07 13:36:29.643729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.147 [2024-10-07 13:36:29.643762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.147 [2024-10-07 13:36:29.643780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.147 [2024-10-07 13:36:29.643895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.147 [2024-10-07 13:36:29.643922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.147 [2024-10-07 13:36:29.643939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.147 [2024-10-07 13:36:29.644048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.147 [2024-10-07 13:36:29.644076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.147 [2024-10-07 13:36:29.644181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.147 [2024-10-07 13:36:29.644203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.147 [2024-10-07 13:36:29.644217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.147 [2024-10-07 13:36:29.644236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.147 [2024-10-07 13:36:29.644258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.147 [2024-10-07 13:36:29.644271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.147 [2024-10-07 13:36:29.644380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.147 [2024-10-07 13:36:29.644401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.147 [2024-10-07 13:36:29.653513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.147 [2024-10-07 13:36:29.653562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.147 [2024-10-07 13:36:29.654450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.147 [2024-10-07 13:36:29.654483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.147 [2024-10-07 13:36:29.654511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.147 [2024-10-07 13:36:29.654631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.147 [2024-10-07 13:36:29.654657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.147 [2024-10-07 13:36:29.654686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.147 [2024-10-07 13:36:29.654706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.147 [2024-10-07 13:36:29.654732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.147 [2024-10-07 13:36:29.654752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.147 [2024-10-07 13:36:29.654766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.147 [2024-10-07 13:36:29.654779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.147 [2024-10-07 13:36:29.654804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.147 [2024-10-07 13:36:29.654823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.147 [2024-10-07 13:36:29.654836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.147 [2024-10-07 13:36:29.654851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.147 [2024-10-07 13:36:29.654874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.147 [2024-10-07 13:36:29.664748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.147 [2024-10-07 13:36:29.664782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.147 [2024-10-07 13:36:29.664947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.147 [2024-10-07 13:36:29.664977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.147 [2024-10-07 13:36:29.664995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.147 [2024-10-07 13:36:29.665091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.147 [2024-10-07 13:36:29.665118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.147 [2024-10-07 13:36:29.665134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.147 [2024-10-07 13:36:29.665160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.147 [2024-10-07 13:36:29.665187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.147 [2024-10-07 13:36:29.665210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.147 [2024-10-07 13:36:29.665225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.147 [2024-10-07 13:36:29.665240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.147 [2024-10-07 13:36:29.665257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.147 [2024-10-07 13:36:29.665272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.147 [2024-10-07 13:36:29.665285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.147 [2024-10-07 13:36:29.665309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.147 [2024-10-07 13:36:29.665327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.147 [2024-10-07 13:36:29.679677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.147 [2024-10-07 13:36:29.679711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.147 [2024-10-07 13:36:29.679933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.147 [2024-10-07 13:36:29.679976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.147 [2024-10-07 13:36:29.679994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.147 [2024-10-07 13:36:29.680727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.147 [2024-10-07 13:36:29.680759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.147 [2024-10-07 13:36:29.680777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.147 [2024-10-07 13:36:29.680803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.147 [2024-10-07 13:36:29.680825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.147 [2024-10-07 13:36:29.680846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.147 [2024-10-07 13:36:29.680860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.147 [2024-10-07 13:36:29.680873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.147 [2024-10-07 13:36:29.680890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.147 [2024-10-07 13:36:29.680905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.147 [2024-10-07 13:36:29.680918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.147 [2024-10-07 13:36:29.680941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.147 [2024-10-07 13:36:29.680958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.147 [2024-10-07 13:36:29.690124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.148 [2024-10-07 13:36:29.690158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.148 [2024-10-07 13:36:29.690443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.148 [2024-10-07 13:36:29.690479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.148 [2024-10-07 13:36:29.690498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.148 [2024-10-07 13:36:29.690636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.148 [2024-10-07 13:36:29.690663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.148 [2024-10-07 13:36:29.690695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.148 [2024-10-07 13:36:29.690803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.148 [2024-10-07 13:36:29.690830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.148 [2024-10-07 13:36:29.690981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.148 [2024-10-07 13:36:29.691005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.148 [2024-10-07 13:36:29.691019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.148 [2024-10-07 13:36:29.691036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.148 [2024-10-07 13:36:29.691056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.148 [2024-10-07 13:36:29.691069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.148 [2024-10-07 13:36:29.691205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.148 [2024-10-07 13:36:29.691228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.148 [2024-10-07 13:36:29.700242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.148 [2024-10-07 13:36:29.700289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.148 [2024-10-07 13:36:29.700533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.148 [2024-10-07 13:36:29.700562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.148 [2024-10-07 13:36:29.700579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.148 [2024-10-07 13:36:29.700628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.148 [2024-10-07 13:36:29.700682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.148 [2024-10-07 13:36:29.700714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.148 [2024-10-07 13:36:29.700727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.148 [2024-10-07 13:36:29.700762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.148 [2024-10-07 13:36:29.712535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.148 [2024-10-07 13:36:29.712892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.148 [2024-10-07 13:36:29.712925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.148 [2024-10-07 13:36:29.712944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.148 [2024-10-07 13:36:29.713037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.148 [2024-10-07 13:36:29.713085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.148 [2024-10-07 13:36:29.713104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.148 [2024-10-07 13:36:29.713117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.148 [2024-10-07 13:36:29.713387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.148 [2024-10-07 13:36:29.726899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.148 [2024-10-07 13:36:29.727095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.148 [2024-10-07 13:36:29.727126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.148 [2024-10-07 13:36:29.727145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.148 [2024-10-07 13:36:29.727192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.148 [2024-10-07 13:36:29.727230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.148 [2024-10-07 13:36:29.727249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.148 [2024-10-07 13:36:29.727263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.148 [2024-10-07 13:36:29.727287] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:56.148 [2024-10-07 13:36:29.727304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.148 [2024-10-07 13:36:29.737212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.148 [2024-10-07 13:36:29.737390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.148 [2024-10-07 13:36:29.737419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.148 [2024-10-07 13:36:29.737436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.148 [2024-10-07 13:36:29.737883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.148 [2024-10-07 13:36:29.737916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.148 [2024-10-07 13:36:29.737932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.148 [2024-10-07 13:36:29.737946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.148 [2024-10-07 13:36:29.737997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.148 [2024-10-07 13:36:29.747313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.148 [2024-10-07 13:36:29.747513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.148 [2024-10-07 13:36:29.747542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.148 [2024-10-07 13:36:29.747559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.148 [2024-10-07 13:36:29.747584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.148 [2024-10-07 13:36:29.747609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.148 [2024-10-07 13:36:29.747624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.148 [2024-10-07 13:36:29.747637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.148 [2024-10-07 13:36:29.747680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.148 [2024-10-07 13:36:29.760661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.148 [2024-10-07 13:36:29.761033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.148 [2024-10-07 13:36:29.761065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.148 [2024-10-07 13:36:29.761083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.148 [2024-10-07 13:36:29.761294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.148 [2024-10-07 13:36:29.761369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.148 [2024-10-07 13:36:29.761391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.148 [2024-10-07 13:36:29.761405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.148 [2024-10-07 13:36:29.761430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.148 [2024-10-07 13:36:29.775457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.148 [2024-10-07 13:36:29.776503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.148 [2024-10-07 13:36:29.776536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.148 [2024-10-07 13:36:29.776554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.148 [2024-10-07 13:36:29.776954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.148 [2024-10-07 13:36:29.777207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.148 [2024-10-07 13:36:29.777233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.148 [2024-10-07 13:36:29.777249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.148 [2024-10-07 13:36:29.777301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.148 [2024-10-07 13:36:29.785547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.148 [2024-10-07 13:36:29.785703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.148 [2024-10-07 13:36:29.785732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.148 [2024-10-07 13:36:29.785749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.148 [2024-10-07 13:36:29.785774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.148 [2024-10-07 13:36:29.785798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.148 [2024-10-07 13:36:29.785813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.148 [2024-10-07 13:36:29.785827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.148 [2024-10-07 13:36:29.785852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.148 [2024-10-07 13:36:29.795630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.148 [2024-10-07 13:36:29.795818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.148 [2024-10-07 13:36:29.795852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.148 [2024-10-07 13:36:29.795870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.148 [2024-10-07 13:36:29.797189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.148 [2024-10-07 13:36:29.798193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.148 [2024-10-07 13:36:29.798218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.149 [2024-10-07 13:36:29.798231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.149 [2024-10-07 13:36:29.799018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.149 [2024-10-07 13:36:29.809450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.149 [2024-10-07 13:36:29.809757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.149 [2024-10-07 13:36:29.809791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.149 [2024-10-07 13:36:29.809810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.149 [2024-10-07 13:36:29.809862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.149 [2024-10-07 13:36:29.809891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.149 [2024-10-07 13:36:29.809907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.149 [2024-10-07 13:36:29.809921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.149 [2024-10-07 13:36:29.809945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.149 [2024-10-07 13:36:29.820333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.149 [2024-10-07 13:36:29.820534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.149 [2024-10-07 13:36:29.820565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.149 [2024-10-07 13:36:29.820582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.149 [2024-10-07 13:36:29.820703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.149 [2024-10-07 13:36:29.820815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.149 [2024-10-07 13:36:29.820836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.149 [2024-10-07 13:36:29.820851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.149 [2024-10-07 13:36:29.823742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.149 [2024-10-07 13:36:29.830533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.149 [2024-10-07 13:36:29.830749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.149 [2024-10-07 13:36:29.830778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.149 [2024-10-07 13:36:29.830796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.149 [2024-10-07 13:36:29.830821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.149 [2024-10-07 13:36:29.830853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.149 [2024-10-07 13:36:29.830869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.149 [2024-10-07 13:36:29.830882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.149 [2024-10-07 13:36:29.830908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.149 [2024-10-07 13:36:29.840773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.149 [2024-10-07 13:36:29.840915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.149 [2024-10-07 13:36:29.840945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.149 [2024-10-07 13:36:29.840963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.149 [2024-10-07 13:36:29.841147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.149 [2024-10-07 13:36:29.841232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.149 [2024-10-07 13:36:29.841254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.149 [2024-10-07 13:36:29.841269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.149 [2024-10-07 13:36:29.841294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.149 [2024-10-07 13:36:29.853085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.149 [2024-10-07 13:36:29.854901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.149 [2024-10-07 13:36:29.854934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.149 [2024-10-07 13:36:29.854951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.149 [2024-10-07 13:36:29.855413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.149 [2024-10-07 13:36:29.855825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.149 [2024-10-07 13:36:29.855851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.149 [2024-10-07 13:36:29.855866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.149 [2024-10-07 13:36:29.856383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.149 [2024-10-07 13:36:29.864580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.149 [2024-10-07 13:36:29.864848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.149 [2024-10-07 13:36:29.864880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.149 [2024-10-07 13:36:29.864898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.149 [2024-10-07 13:36:29.865008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.149 [2024-10-07 13:36:29.865146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.149 [2024-10-07 13:36:29.865183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.149 [2024-10-07 13:36:29.865197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.149 [2024-10-07 13:36:29.865314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.149 [2024-10-07 13:36:29.875377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.149 [2024-10-07 13:36:29.875610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.149 [2024-10-07 13:36:29.875642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.149 [2024-10-07 13:36:29.875660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.149 [2024-10-07 13:36:29.875779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.149 [2024-10-07 13:36:29.875905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.149 [2024-10-07 13:36:29.875926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.149 [2024-10-07 13:36:29.875939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.149 [2024-10-07 13:36:29.876055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.149 [2024-10-07 13:36:29.885852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.149 [2024-10-07 13:36:29.885979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.149 [2024-10-07 13:36:29.886009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.149 [2024-10-07 13:36:29.886026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.149 [2024-10-07 13:36:29.886211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.149 [2024-10-07 13:36:29.886282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.149 [2024-10-07 13:36:29.886317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.149 [2024-10-07 13:36:29.886332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.149 [2024-10-07 13:36:29.886358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.149 [2024-10-07 13:36:29.899877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.149 [2024-10-07 13:36:29.900028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.149 [2024-10-07 13:36:29.900057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.149 [2024-10-07 13:36:29.900075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.149 [2024-10-07 13:36:29.900101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.149 [2024-10-07 13:36:29.900125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.149 [2024-10-07 13:36:29.900141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.149 [2024-10-07 13:36:29.900154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.149 [2024-10-07 13:36:29.900592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.149 [2024-10-07 13:36:29.915179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.149 [2024-10-07 13:36:29.915755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.150 [2024-10-07 13:36:29.915788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.150 [2024-10-07 13:36:29.915811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.150 [2024-10-07 13:36:29.916030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.150 [2024-10-07 13:36:29.916087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.150 [2024-10-07 13:36:29.916108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.150 [2024-10-07 13:36:29.916122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.150 [2024-10-07 13:36:29.916147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.150 [2024-10-07 13:36:29.930103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.150 [2024-10-07 13:36:29.930332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.150 [2024-10-07 13:36:29.930362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.150 [2024-10-07 13:36:29.930379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.150 [2024-10-07 13:36:29.930405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.150 [2024-10-07 13:36:29.930470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.150 [2024-10-07 13:36:29.930492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.150 [2024-10-07 13:36:29.930506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.150 [2024-10-07 13:36:29.930531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.150 [2024-10-07 13:36:29.943083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.150 [2024-10-07 13:36:29.943494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.150 [2024-10-07 13:36:29.943527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.150 [2024-10-07 13:36:29.943545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.150 [2024-10-07 13:36:29.943763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.150 [2024-10-07 13:36:29.943821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.150 [2024-10-07 13:36:29.943842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.150 [2024-10-07 13:36:29.943856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.150 [2024-10-07 13:36:29.943881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.150 [2024-10-07 13:36:29.958563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.150 [2024-10-07 13:36:29.958970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.150 [2024-10-07 13:36:29.959003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.150 [2024-10-07 13:36:29.959021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.150 [2024-10-07 13:36:29.959539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.150 [2024-10-07 13:36:29.959802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.150 [2024-10-07 13:36:29.959834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.150 [2024-10-07 13:36:29.959850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.150 [2024-10-07 13:36:29.960064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.150 [2024-10-07 13:36:29.970205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.150 [2024-10-07 13:36:29.970434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.150 [2024-10-07 13:36:29.970463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.150 [2024-10-07 13:36:29.970482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.150 [2024-10-07 13:36:29.974266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.150 [2024-10-07 13:36:29.974896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.150 [2024-10-07 13:36:29.974922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.150 [2024-10-07 13:36:29.974936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.150 [2024-10-07 13:36:29.975228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.150 [2024-10-07 13:36:29.980294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.150 [2024-10-07 13:36:29.980470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.150 [2024-10-07 13:36:29.980499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.150 [2024-10-07 13:36:29.980516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.150 [2024-10-07 13:36:29.980542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.150 [2024-10-07 13:36:29.980566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.150 [2024-10-07 13:36:29.980581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.150 [2024-10-07 13:36:29.980594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.150 [2024-10-07 13:36:29.980619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.150 [2024-10-07 13:36:29.990574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.150 [2024-10-07 13:36:29.990714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.150 [2024-10-07 13:36:29.990744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.150 [2024-10-07 13:36:29.990762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.150 [2024-10-07 13:36:29.990947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.150 [2024-10-07 13:36:29.991018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.150 [2024-10-07 13:36:29.991038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.150 [2024-10-07 13:36:29.991051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.150 [2024-10-07 13:36:29.991093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.150 [2024-10-07 13:36:30.004212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.150 [2024-10-07 13:36:30.004378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.150 [2024-10-07 13:36:30.004409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.150 [2024-10-07 13:36:30.004427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.150 [2024-10-07 13:36:30.004454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.150 [2024-10-07 13:36:30.004505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.150 [2024-10-07 13:36:30.004525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.150 [2024-10-07 13:36:30.004539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.150 [2024-10-07 13:36:30.004563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.150 [2024-10-07 13:36:30.014311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.150 [2024-10-07 13:36:30.014500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.150 [2024-10-07 13:36:30.014530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.150 [2024-10-07 13:36:30.014547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.150 [2024-10-07 13:36:30.014573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.150 [2024-10-07 13:36:30.014624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.150 [2024-10-07 13:36:30.014644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.150 [2024-10-07 13:36:30.014658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.150 [2024-10-07 13:36:30.014694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.150 [2024-10-07 13:36:30.024537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.150 [2024-10-07 13:36:30.024698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.150 [2024-10-07 13:36:30.024729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.150 [2024-10-07 13:36:30.024746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.150 [2024-10-07 13:36:30.024772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.150 [2024-10-07 13:36:30.024822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.150 [2024-10-07 13:36:30.024842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.150 [2024-10-07 13:36:30.024857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.150 [2024-10-07 13:36:30.024882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.150 [2024-10-07 13:36:30.037624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.150 [2024-10-07 13:36:30.037812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.150 [2024-10-07 13:36:30.037843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.150 [2024-10-07 13:36:30.037860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.150 [2024-10-07 13:36:30.037894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.150 [2024-10-07 13:36:30.037919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.150 [2024-10-07 13:36:30.037934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.150 [2024-10-07 13:36:30.037948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.151 [2024-10-07 13:36:30.037973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.151 [2024-10-07 13:36:30.052381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.151 [2024-10-07 13:36:30.052512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.151 [2024-10-07 13:36:30.052542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.151 [2024-10-07 13:36:30.052559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.151 [2024-10-07 13:36:30.052585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.151 [2024-10-07 13:36:30.052609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.151 [2024-10-07 13:36:30.052624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.151 [2024-10-07 13:36:30.052639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.151 [2024-10-07 13:36:30.052663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.151 [2024-10-07 13:36:30.066494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.151 [2024-10-07 13:36:30.068920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.151 [2024-10-07 13:36:30.068955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.151 [2024-10-07 13:36:30.068973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.151 [2024-10-07 13:36:30.069772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.151 [2024-10-07 13:36:30.070159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.151 [2024-10-07 13:36:30.070183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.151 [2024-10-07 13:36:30.070197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.151 [2024-10-07 13:36:30.070276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.151 [2024-10-07 13:36:30.076583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.151 [2024-10-07 13:36:30.076826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.151 [2024-10-07 13:36:30.076857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.151 [2024-10-07 13:36:30.076875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.151 [2024-10-07 13:36:30.076901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.151 [2024-10-07 13:36:30.076924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.151 [2024-10-07 13:36:30.076939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.151 [2024-10-07 13:36:30.076959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.151 [2024-10-07 13:36:30.076985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.151 [2024-10-07 13:36:30.086734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.151 [2024-10-07 13:36:30.086889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.151 [2024-10-07 13:36:30.086918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.151 [2024-10-07 13:36:30.086937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.151 [2024-10-07 13:36:30.086961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.151 [2024-10-07 13:36:30.086986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.151 [2024-10-07 13:36:30.087001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.151 [2024-10-07 13:36:30.087014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.151 [2024-10-07 13:36:30.087211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.151 [2024-10-07 13:36:30.100151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.151 [2024-10-07 13:36:30.100551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.151 [2024-10-07 13:36:30.100585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.151 [2024-10-07 13:36:30.100603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.151 [2024-10-07 13:36:30.100819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.151 [2024-10-07 13:36:30.101037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.151 [2024-10-07 13:36:30.101063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.151 [2024-10-07 13:36:30.101078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.151 [2024-10-07 13:36:30.101132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.151 [2024-10-07 13:36:30.112814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.151 [2024-10-07 13:36:30.116504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.151 [2024-10-07 13:36:30.116538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.151 [2024-10-07 13:36:30.116556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.151 [2024-10-07 13:36:30.117091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.151 [2024-10-07 13:36:30.117367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.151 [2024-10-07 13:36:30.117393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.151 [2024-10-07 13:36:30.117407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.151 [2024-10-07 13:36:30.117611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.151 [2024-10-07 13:36:30.122901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.151 [2024-10-07 13:36:30.123083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.151 [2024-10-07 13:36:30.123111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.151 [2024-10-07 13:36:30.123128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.151 [2024-10-07 13:36:30.123153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.151 [2024-10-07 13:36:30.123177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.151 [2024-10-07 13:36:30.123192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.151 [2024-10-07 13:36:30.123205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.151 [2024-10-07 13:36:30.123229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.151 [2024-10-07 13:36:30.133021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.151 [2024-10-07 13:36:30.133425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.151 [2024-10-07 13:36:30.133456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.151 [2024-10-07 13:36:30.133474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.151 [2024-10-07 13:36:30.133526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.151 [2024-10-07 13:36:30.133720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.151 [2024-10-07 13:36:30.133744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.151 [2024-10-07 13:36:30.133759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.151 [2024-10-07 13:36:30.133810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.151 [2024-10-07 13:36:30.148503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.151 [2024-10-07 13:36:30.148647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.151 [2024-10-07 13:36:30.148685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.151 [2024-10-07 13:36:30.148704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.151 [2024-10-07 13:36:30.148730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.151 [2024-10-07 13:36:30.148754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.151 [2024-10-07 13:36:30.148769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.151 [2024-10-07 13:36:30.148783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.151 [2024-10-07 13:36:30.148808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.151 [2024-10-07 13:36:30.164002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.151 [2024-10-07 13:36:30.164383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.151 [2024-10-07 13:36:30.164416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.151 [2024-10-07 13:36:30.164434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.151 [2024-10-07 13:36:30.164640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.151 [2024-10-07 13:36:30.164871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.151 [2024-10-07 13:36:30.164896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.151 [2024-10-07 13:36:30.164912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.151 [2024-10-07 13:36:30.164964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.151 [2024-10-07 13:36:30.180196] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.151 [2024-10-07 13:36:30.180560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.151 [2024-10-07 13:36:30.180593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.151 [2024-10-07 13:36:30.180611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.151 [2024-10-07 13:36:30.180825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.151 [2024-10-07 13:36:30.180882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.151 [2024-10-07 13:36:30.180902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.152 [2024-10-07 13:36:30.180917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.152 [2024-10-07 13:36:30.181099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.152 [2024-10-07 13:36:30.195857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.152 [2024-10-07 13:36:30.196267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.152 [2024-10-07 13:36:30.196300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.152 [2024-10-07 13:36:30.196318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.152 [2024-10-07 13:36:30.196522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.152 [2024-10-07 13:36:30.196579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.152 [2024-10-07 13:36:30.196599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.152 [2024-10-07 13:36:30.196614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.152 [2024-10-07 13:36:30.196640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.152 [2024-10-07 13:36:30.212155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.152 [2024-10-07 13:36:30.212735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.152 [2024-10-07 13:36:30.212768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.152 [2024-10-07 13:36:30.212786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.152 [2024-10-07 13:36:30.213004] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.152 [2024-10-07 13:36:30.213061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.152 [2024-10-07 13:36:30.213082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.152 [2024-10-07 13:36:30.213096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.152 [2024-10-07 13:36:30.213284] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.152 [2024-10-07 13:36:30.228222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.152 [2024-10-07 13:36:30.228775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.152 [2024-10-07 13:36:30.228807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.152 [2024-10-07 13:36:30.228825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.152 [2024-10-07 13:36:30.229042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.152 [2024-10-07 13:36:30.229251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.152 [2024-10-07 13:36:30.229276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.152 [2024-10-07 13:36:30.229290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.152 [2024-10-07 13:36:30.229341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.152 [2024-10-07 13:36:30.243259] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.152 [2024-10-07 13:36:30.243380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.152 [2024-10-07 13:36:30.243409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.152 [2024-10-07 13:36:30.243426] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.152 [2024-10-07 13:36:30.243452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.152 [2024-10-07 13:36:30.243476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.152 [2024-10-07 13:36:30.243491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.152 [2024-10-07 13:36:30.243505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.152 [2024-10-07 13:36:30.243530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.152 [2024-10-07 13:36:30.255986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.152 [2024-10-07 13:36:30.256190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.152 [2024-10-07 13:36:30.256220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.152 [2024-10-07 13:36:30.256238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.152 [2024-10-07 13:36:30.256347] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.152 [2024-10-07 13:36:30.258276] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.152 [2024-10-07 13:36:30.258303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.152 [2024-10-07 13:36:30.258318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.152 [2024-10-07 13:36:30.258403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.152 [2024-10-07 13:36:30.266222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.152 [2024-10-07 13:36:30.266417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.152 [2024-10-07 13:36:30.266446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.152 [2024-10-07 13:36:30.266468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.152 [2024-10-07 13:36:30.266495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.152 [2024-10-07 13:36:30.266519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.152 [2024-10-07 13:36:30.266534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.152 [2024-10-07 13:36:30.266548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.152 [2024-10-07 13:36:30.266571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.152 [2024-10-07 13:36:30.277305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.152 [2024-10-07 13:36:30.277430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.152 [2024-10-07 13:36:30.277459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.152 [2024-10-07 13:36:30.277477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.152 [2024-10-07 13:36:30.277503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.152 [2024-10-07 13:36:30.277544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.152 [2024-10-07 13:36:30.277563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.152 [2024-10-07 13:36:30.277577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.152 [2024-10-07 13:36:30.277603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.152 [2024-10-07 13:36:30.289493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.152 [2024-10-07 13:36:30.289663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.152 [2024-10-07 13:36:30.289698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.152 [2024-10-07 13:36:30.289715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.152 [2024-10-07 13:36:30.289741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.152 [2024-10-07 13:36:30.289765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.152 [2024-10-07 13:36:30.289780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.152 [2024-10-07 13:36:30.289794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.152 [2024-10-07 13:36:30.289818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.152 [2024-10-07 13:36:30.304249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.152 [2024-10-07 13:36:30.304464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.152 [2024-10-07 13:36:30.304493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.152 [2024-10-07 13:36:30.304510] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.152 [2024-10-07 13:36:30.304537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.152 [2024-10-07 13:36:30.304567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.152 [2024-10-07 13:36:30.304583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.152 [2024-10-07 13:36:30.304597] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.152 [2024-10-07 13:36:30.304622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.152 [2024-10-07 13:36:30.320343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.152 [2024-10-07 13:36:30.320517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.152 [2024-10-07 13:36:30.320547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.152 [2024-10-07 13:36:30.320564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.152 [2024-10-07 13:36:30.320590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.152 [2024-10-07 13:36:30.320614] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.152 [2024-10-07 13:36:30.320629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.152 [2024-10-07 13:36:30.320643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.152 [2024-10-07 13:36:30.320676] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.152 [2024-10-07 13:36:30.331122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.152 [2024-10-07 13:36:30.331379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.152 [2024-10-07 13:36:30.331411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.152 [2024-10-07 13:36:30.331430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.152 [2024-10-07 13:36:30.331540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.152 [2024-10-07 13:36:30.331676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.153 [2024-10-07 13:36:30.331699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.153 [2024-10-07 13:36:30.331713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.153 [2024-10-07 13:36:30.332684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.153 [2024-10-07 13:36:30.342315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.153 [2024-10-07 13:36:30.342482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.153 [2024-10-07 13:36:30.342511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.153 [2024-10-07 13:36:30.342528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.153 [2024-10-07 13:36:30.342553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.153 [2024-10-07 13:36:30.342577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.153 [2024-10-07 13:36:30.342592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.153 [2024-10-07 13:36:30.342605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.153 [2024-10-07 13:36:30.342629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.153 [2024-10-07 13:36:30.352405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.153 [2024-10-07 13:36:30.352587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.153 [2024-10-07 13:36:30.352616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.153 [2024-10-07 13:36:30.352634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.153 [2024-10-07 13:36:30.352659] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.153 [2024-10-07 13:36:30.352693] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.153 [2024-10-07 13:36:30.352719] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.153 [2024-10-07 13:36:30.352733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.153 [2024-10-07 13:36:30.352757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.153 [2024-10-07 13:36:30.364552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.153 [2024-10-07 13:36:30.365395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.153 [2024-10-07 13:36:30.365428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.153 [2024-10-07 13:36:30.365446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.153 [2024-10-07 13:36:30.365860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.153 [2024-10-07 13:36:30.366084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.153 [2024-10-07 13:36:30.366109] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.153 [2024-10-07 13:36:30.366124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.153 [2024-10-07 13:36:30.366175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.153 [2024-10-07 13:36:30.374639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.153 [2024-10-07 13:36:30.374817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.153 [2024-10-07 13:36:30.374846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.153 [2024-10-07 13:36:30.374862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.153 [2024-10-07 13:36:30.374888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.153 [2024-10-07 13:36:30.374913] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.153 [2024-10-07 13:36:30.374929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.153 [2024-10-07 13:36:30.374943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.153 [2024-10-07 13:36:30.374967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.153 [2024-10-07 13:36:30.384822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.153 [2024-10-07 13:36:30.384948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.153 [2024-10-07 13:36:30.384977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.153 [2024-10-07 13:36:30.385005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.153 [2024-10-07 13:36:30.385031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.153 [2024-10-07 13:36:30.385055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.153 [2024-10-07 13:36:30.385071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.153 [2024-10-07 13:36:30.385085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.153 [2024-10-07 13:36:30.385110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.153 [2024-10-07 13:36:30.397722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.153 [2024-10-07 13:36:30.398095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.153 [2024-10-07 13:36:30.398128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.153 [2024-10-07 13:36:30.398146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.153 [2024-10-07 13:36:30.398366] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.153 [2024-10-07 13:36:30.398423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.153 [2024-10-07 13:36:30.398444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.153 [2024-10-07 13:36:30.398458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.153 [2024-10-07 13:36:30.398483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.153 [2024-10-07 13:36:30.409661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.153 [2024-10-07 13:36:30.411888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.153 [2024-10-07 13:36:30.411921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.153 [2024-10-07 13:36:30.411939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.153 [2024-10-07 13:36:30.412704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.153 [2024-10-07 13:36:30.413146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.153 [2024-10-07 13:36:30.413172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.153 [2024-10-07 13:36:30.413202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.153 [2024-10-07 13:36:30.413279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.153 [2024-10-07 13:36:30.419758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.153 [2024-10-07 13:36:30.421919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.153 [2024-10-07 13:36:30.421952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.153 [2024-10-07 13:36:30.421970] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.153 [2024-10-07 13:36:30.422144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.153 [2024-10-07 13:36:30.422257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.153 [2024-10-07 13:36:30.422283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.153 [2024-10-07 13:36:30.422298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.153 [2024-10-07 13:36:30.422407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.153 [2024-10-07 13:36:30.429843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.153 [2024-10-07 13:36:30.430017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.153 [2024-10-07 13:36:30.430046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.153 [2024-10-07 13:36:30.430064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.153 [2024-10-07 13:36:30.430089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.153 [2024-10-07 13:36:30.430114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.153 [2024-10-07 13:36:30.430129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.153 [2024-10-07 13:36:30.430142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.153 [2024-10-07 13:36:30.430166] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.153 [2024-10-07 13:36:30.441953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.153 [2024-10-07 13:36:30.442371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.154 [2024-10-07 13:36:30.442404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.154 [2024-10-07 13:36:30.442422] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.154 [2024-10-07 13:36:30.442637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.154 [2024-10-07 13:36:30.442854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.154 [2024-10-07 13:36:30.442880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.154 [2024-10-07 13:36:30.442895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.154 [2024-10-07 13:36:30.442947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.154 [2024-10-07 13:36:30.455768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.154 [2024-10-07 13:36:30.456293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.154 [2024-10-07 13:36:30.456325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.154 [2024-10-07 13:36:30.456342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.154 [2024-10-07 13:36:30.456419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.154 [2024-10-07 13:36:30.456983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.154 [2024-10-07 13:36:30.457010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.154 [2024-10-07 13:36:30.457023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.154 [2024-10-07 13:36:30.457262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.154 [2024-10-07 13:36:30.471725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.154 [2024-10-07 13:36:30.471868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.154 [2024-10-07 13:36:30.471898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.154 [2024-10-07 13:36:30.471915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.154 [2024-10-07 13:36:30.471940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.154 [2024-10-07 13:36:30.471964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.154 [2024-10-07 13:36:30.471979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.154 [2024-10-07 13:36:30.471992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.154 [2024-10-07 13:36:30.472016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.154 [2024-10-07 13:36:30.486390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.154 [2024-10-07 13:36:30.486716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.154 [2024-10-07 13:36:30.486748] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.154 [2024-10-07 13:36:30.486766] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.154 [2024-10-07 13:36:30.486994] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.154 [2024-10-07 13:36:30.487052] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.154 [2024-10-07 13:36:30.487072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.154 [2024-10-07 13:36:30.487086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.154 [2024-10-07 13:36:30.487112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.154 [2024-10-07 13:36:30.501541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.154 [2024-10-07 13:36:30.501923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.154 [2024-10-07 13:36:30.501956] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.154 [2024-10-07 13:36:30.501974] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.154 [2024-10-07 13:36:30.502219] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.154 [2024-10-07 13:36:30.502433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.154 [2024-10-07 13:36:30.502458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.154 [2024-10-07 13:36:30.502472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.154 [2024-10-07 13:36:30.502525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.154 [2024-10-07 13:36:30.516453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.154 [2024-10-07 13:36:30.516825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.154 [2024-10-07 13:36:30.516859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.154 [2024-10-07 13:36:30.516877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.154 [2024-10-07 13:36:30.517368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.154 [2024-10-07 13:36:30.517608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.154 [2024-10-07 13:36:30.517634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.154 [2024-10-07 13:36:30.517650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.154 [2024-10-07 13:36:30.517872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.154 [2024-10-07 13:36:30.527757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.154 [2024-10-07 13:36:30.527985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.154 [2024-10-07 13:36:30.528017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.154 [2024-10-07 13:36:30.528035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.154 [2024-10-07 13:36:30.530267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.154 [2024-10-07 13:36:30.530698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.154 [2024-10-07 13:36:30.530724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.154 [2024-10-07 13:36:30.530740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.154 [2024-10-07 13:36:30.531736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.154 [2024-10-07 13:36:30.537845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.154 [2024-10-07 13:36:30.537980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.154 [2024-10-07 13:36:30.538023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.154 [2024-10-07 13:36:30.538040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.154 [2024-10-07 13:36:30.538064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.154 [2024-10-07 13:36:30.538088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.154 [2024-10-07 13:36:30.538103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.154 [2024-10-07 13:36:30.538131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.154 [2024-10-07 13:36:30.538157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.154 [2024-10-07 13:36:30.548178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.154 [2024-10-07 13:36:30.548350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.154 [2024-10-07 13:36:30.548379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.154 [2024-10-07 13:36:30.548396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.154 [2024-10-07 13:36:30.548581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.154 [2024-10-07 13:36:30.548653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.154 [2024-10-07 13:36:30.548698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.154 [2024-10-07 13:36:30.548720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.154 [2024-10-07 13:36:30.548746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.154 [2024-10-07 13:36:30.560468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.154 [2024-10-07 13:36:30.560825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.154 [2024-10-07 13:36:30.560859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.154 [2024-10-07 13:36:30.560877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.154 [2024-10-07 13:36:30.561083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.154 [2024-10-07 13:36:30.561291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.154 [2024-10-07 13:36:30.561316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.154 [2024-10-07 13:36:30.561331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.154 [2024-10-07 13:36:30.561382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.154 [2024-10-07 13:36:30.574555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.154 [2024-10-07 13:36:30.575273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.154 [2024-10-07 13:36:30.575305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.154 [2024-10-07 13:36:30.575323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.154 [2024-10-07 13:36:30.575729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.154 [2024-10-07 13:36:30.575956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.154 [2024-10-07 13:36:30.575982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.154 [2024-10-07 13:36:30.575997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.154 [2024-10-07 13:36:30.576049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.155 [2024-10-07 13:36:30.585179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.155 [2024-10-07 13:36:30.585379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.155 [2024-10-07 13:36:30.585409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.155 [2024-10-07 13:36:30.585427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.155 [2024-10-07 13:36:30.585551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.155 [2024-10-07 13:36:30.585704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.155 [2024-10-07 13:36:30.585727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.155 [2024-10-07 13:36:30.585741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.155 [2024-10-07 13:36:30.585848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.155 [2024-10-07 13:36:30.595297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.155 [2024-10-07 13:36:30.595503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.155 [2024-10-07 13:36:30.595537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.155 [2024-10-07 13:36:30.595556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.155 [2024-10-07 13:36:30.595582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.155 [2024-10-07 13:36:30.595606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.155 [2024-10-07 13:36:30.595621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.155 [2024-10-07 13:36:30.595635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.155 [2024-10-07 13:36:30.595660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.155 [2024-10-07 13:36:30.606060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.155 [2024-10-07 13:36:30.606234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.155 [2024-10-07 13:36:30.606264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.155 [2024-10-07 13:36:30.606281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.155 [2024-10-07 13:36:30.606306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.155 [2024-10-07 13:36:30.606331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.155 [2024-10-07 13:36:30.606347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.155 [2024-10-07 13:36:30.606360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.155 [2024-10-07 13:36:30.606385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.155 [2024-10-07 13:36:30.620722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.155 [2024-10-07 13:36:30.621083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.155 [2024-10-07 13:36:30.621116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.155 [2024-10-07 13:36:30.621134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.155 8436.33 IOPS, 32.95 MiB/s [2024-10-07T11:36:37.867Z] [2024-10-07 13:36:30.623610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.155 [2024-10-07 13:36:30.623767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.155 [2024-10-07 13:36:30.623790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.155 [2024-10-07 13:36:30.623804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.155 [2024-10-07 13:36:30.623829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.155 [2024-10-07 13:36:30.632069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.155 [2024-10-07 13:36:30.632299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.155 [2024-10-07 13:36:30.632330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.155 [2024-10-07 13:36:30.632348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.155 [2024-10-07 13:36:30.635974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.155 [2024-10-07 13:36:30.636609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.155 [2024-10-07 13:36:30.636636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.155 [2024-10-07 13:36:30.636675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.155 [2024-10-07 13:36:30.636937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.155 [2024-10-07 13:36:30.642155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.155 [2024-10-07 13:36:30.642311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.155 [2024-10-07 13:36:30.642340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.155 [2024-10-07 13:36:30.642357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.155 [2024-10-07 13:36:30.642382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.155 [2024-10-07 13:36:30.642407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.155 [2024-10-07 13:36:30.642422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.155 [2024-10-07 13:36:30.642436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.155 [2024-10-07 13:36:30.642928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.155 [2024-10-07 13:36:30.652529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.155 [2024-10-07 13:36:30.652851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.155 [2024-10-07 13:36:30.652886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.155 [2024-10-07 13:36:30.652904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.155 [2024-10-07 13:36:30.652957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.155 [2024-10-07 13:36:30.652986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.155 [2024-10-07 13:36:30.653002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.155 [2024-10-07 13:36:30.653015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.155 [2024-10-07 13:36:30.653462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.155 [2024-10-07 13:36:30.667304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.155 [2024-10-07 13:36:30.667695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.155 [2024-10-07 13:36:30.667730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.155 [2024-10-07 13:36:30.667748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.155 [2024-10-07 13:36:30.667954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.155 [2024-10-07 13:36:30.668012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.155 [2024-10-07 13:36:30.668035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.155 [2024-10-07 13:36:30.668050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.155 [2024-10-07 13:36:30.668081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.155 [2024-10-07 13:36:30.681956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.155 [2024-10-07 13:36:30.682139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.155 [2024-10-07 13:36:30.682170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.155 [2024-10-07 13:36:30.682189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.155 [2024-10-07 13:36:30.682434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.155 [2024-10-07 13:36:30.682497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.155 [2024-10-07 13:36:30.682519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.155 [2024-10-07 13:36:30.682533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.155 [2024-10-07 13:36:30.682728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.155 [2024-10-07 13:36:30.697279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.155 [2024-10-07 13:36:30.697527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.155 [2024-10-07 13:36:30.697557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.155 [2024-10-07 13:36:30.697575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.155 [2024-10-07 13:36:30.697601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.155 [2024-10-07 13:36:30.697643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.155 [2024-10-07 13:36:30.697663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.155 [2024-10-07 13:36:30.697688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.155 [2024-10-07 13:36:30.697714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.155 [2024-10-07 13:36:30.707870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.155 [2024-10-07 13:36:30.708161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.155 [2024-10-07 13:36:30.708192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.155 [2024-10-07 13:36:30.708210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.155 [2024-10-07 13:36:30.708319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.155 [2024-10-07 13:36:30.708443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.155 [2024-10-07 13:36:30.708465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.156 [2024-10-07 13:36:30.708479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.156 [2024-10-07 13:36:30.711583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.156 [2024-10-07 13:36:30.717975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.156 [2024-10-07 13:36:30.718126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.156 [2024-10-07 13:36:30.718155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.156 [2024-10-07 13:36:30.718177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.156 [2024-10-07 13:36:30.718204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.156 [2024-10-07 13:36:30.718228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.156 [2024-10-07 13:36:30.718244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.156 [2024-10-07 13:36:30.718258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.156 [2024-10-07 13:36:30.718282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.156 [2024-10-07 13:36:30.728060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.156 [2024-10-07 13:36:30.728275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.156 [2024-10-07 13:36:30.728305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.156 [2024-10-07 13:36:30.728322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.156 [2024-10-07 13:36:30.728348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.156 [2024-10-07 13:36:30.728372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.156 [2024-10-07 13:36:30.728387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.156 [2024-10-07 13:36:30.728401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.156 [2024-10-07 13:36:30.728426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.156 [2024-10-07 13:36:30.740938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.156 [2024-10-07 13:36:30.741222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.156 [2024-10-07 13:36:30.741254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.156 [2024-10-07 13:36:30.741272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.156 [2024-10-07 13:36:30.743066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.156 [2024-10-07 13:36:30.743800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.156 [2024-10-07 13:36:30.743825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.156 [2024-10-07 13:36:30.743840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.156 [2024-10-07 13:36:30.744130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.156 [2024-10-07 13:36:30.751193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.156 [2024-10-07 13:36:30.751391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.156 [2024-10-07 13:36:30.751421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.156 [2024-10-07 13:36:30.751440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.156 [2024-10-07 13:36:30.751852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.156 [2024-10-07 13:36:30.751906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.156 [2024-10-07 13:36:30.751926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.156 [2024-10-07 13:36:30.751940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.156 [2024-10-07 13:36:30.751965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.156 [2024-10-07 13:36:30.762778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.156 [2024-10-07 13:36:30.762901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.156 [2024-10-07 13:36:30.762931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.156 [2024-10-07 13:36:30.762948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.156 [2024-10-07 13:36:30.762988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.156 [2024-10-07 13:36:30.763020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.156 [2024-10-07 13:36:30.763036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.156 [2024-10-07 13:36:30.763050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.156 [2024-10-07 13:36:30.763075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.156 [2024-10-07 13:36:30.774546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.156 [2024-10-07 13:36:30.774871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.156 [2024-10-07 13:36:30.774903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.156 [2024-10-07 13:36:30.774921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.156 [2024-10-07 13:36:30.775127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.156 [2024-10-07 13:36:30.775193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.156 [2024-10-07 13:36:30.775214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.156 [2024-10-07 13:36:30.775229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.156 [2024-10-07 13:36:30.775254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.156 [2024-10-07 13:36:30.784821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.156 [2024-10-07 13:36:30.785046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.156 [2024-10-07 13:36:30.785077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.156 [2024-10-07 13:36:30.785095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.156 [2024-10-07 13:36:30.785202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.156 [2024-10-07 13:36:30.785325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.156 [2024-10-07 13:36:30.785347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.156 [2024-10-07 13:36:30.785360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.156 [2024-10-07 13:36:30.789450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.156 [2024-10-07 13:36:30.794909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.156 [2024-10-07 13:36:30.795036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.156 [2024-10-07 13:36:30.795066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.156 [2024-10-07 13:36:30.795083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.156 [2024-10-07 13:36:30.795109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.156 [2024-10-07 13:36:30.795133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.156 [2024-10-07 13:36:30.795149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.156 [2024-10-07 13:36:30.795163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.156 [2024-10-07 13:36:30.795187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.156 [2024-10-07 13:36:30.804995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.156 [2024-10-07 13:36:30.806077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.156 [2024-10-07 13:36:30.806110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.156 [2024-10-07 13:36:30.806139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.156 [2024-10-07 13:36:30.806164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.156 [2024-10-07 13:36:30.806187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.156 [2024-10-07 13:36:30.806201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.156 [2024-10-07 13:36:30.806214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.156 [2024-10-07 13:36:30.806237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.156 [2024-10-07 13:36:30.820433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.156 [2024-10-07 13:36:30.820626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.156 [2024-10-07 13:36:30.820658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.156 [2024-10-07 13:36:30.820711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.156 [2024-10-07 13:36:30.820741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.156 [2024-10-07 13:36:30.820765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.156 [2024-10-07 13:36:30.820781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.156 [2024-10-07 13:36:30.820793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.156 [2024-10-07 13:36:30.820819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.156 [2024-10-07 13:36:30.832312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.156 [2024-10-07 13:36:30.832567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.156 [2024-10-07 13:36:30.832598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.156 [2024-10-07 13:36:30.832622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.156 [2024-10-07 13:36:30.832759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.156 [2024-10-07 13:36:30.832873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.157 [2024-10-07 13:36:30.832895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.157 [2024-10-07 13:36:30.832908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.157 [2024-10-07 13:36:30.836039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.157 [2024-10-07 13:36:30.842401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.157 [2024-10-07 13:36:30.842608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.157 [2024-10-07 13:36:30.842637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.157 [2024-10-07 13:36:30.842662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.157 [2024-10-07 13:36:30.842698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.157 [2024-10-07 13:36:30.842734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.157 [2024-10-07 13:36:30.842748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.157 [2024-10-07 13:36:30.842762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.157 [2024-10-07 13:36:30.842786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.157 [2024-10-07 13:36:30.852956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.157 [2024-10-07 13:36:30.853107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.157 [2024-10-07 13:36:30.853138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.157 [2024-10-07 13:36:30.853155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.157 [2024-10-07 13:36:30.853340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.157 [2024-10-07 13:36:30.853414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.157 [2024-10-07 13:36:30.853451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.157 [2024-10-07 13:36:30.853466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.157 [2024-10-07 13:36:30.853491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.157 [2024-10-07 13:36:30.868442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.157 [2024-10-07 13:36:30.868606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.157 [2024-10-07 13:36:30.868636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.157 [2024-10-07 13:36:30.868654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.157 [2024-10-07 13:36:30.868689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.157 [2024-10-07 13:36:30.868740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.157 [2024-10-07 13:36:30.868765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.157 [2024-10-07 13:36:30.868779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.157 [2024-10-07 13:36:30.868805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.157 [2024-10-07 13:36:30.880516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.157 [2024-10-07 13:36:30.880775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.157 [2024-10-07 13:36:30.880806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.157 [2024-10-07 13:36:30.880825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.157 [2024-10-07 13:36:30.880933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.157 [2024-10-07 13:36:30.883078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.157 [2024-10-07 13:36:30.883106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.157 [2024-10-07 13:36:30.883126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.157 [2024-10-07 13:36:30.883738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.157 [2024-10-07 13:36:30.891825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.157 [2024-10-07 13:36:30.892009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.157 [2024-10-07 13:36:30.892039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.157 [2024-10-07 13:36:30.892057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.157 [2024-10-07 13:36:30.892083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.157 [2024-10-07 13:36:30.892108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.157 [2024-10-07 13:36:30.892124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.157 [2024-10-07 13:36:30.892137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.157 [2024-10-07 13:36:30.892161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.157 [2024-10-07 13:36:30.902681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.157 [2024-10-07 13:36:30.902903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.157 [2024-10-07 13:36:30.902934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.157 [2024-10-07 13:36:30.902952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.157 [2024-10-07 13:36:30.903060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.157 [2024-10-07 13:36:30.905015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.157 [2024-10-07 13:36:30.905042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.157 [2024-10-07 13:36:30.905062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.157 [2024-10-07 13:36:30.905160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.157 [2024-10-07 13:36:30.912773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.157 [2024-10-07 13:36:30.912931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.157 [2024-10-07 13:36:30.912960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.157 [2024-10-07 13:36:30.912977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.157 [2024-10-07 13:36:30.913176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.157 [2024-10-07 13:36:30.913247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.157 [2024-10-07 13:36:30.913268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.157 [2024-10-07 13:36:30.913297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.157 [2024-10-07 13:36:30.913323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.157 [2024-10-07 13:36:30.926930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.157 [2024-10-07 13:36:30.927174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.157 [2024-10-07 13:36:30.927206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.157 [2024-10-07 13:36:30.927223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.157 [2024-10-07 13:36:30.927249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.157 [2024-10-07 13:36:30.927274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.157 [2024-10-07 13:36:30.927289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.157 [2024-10-07 13:36:30.927302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.157 [2024-10-07 13:36:30.927328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.157 [2024-10-07 13:36:30.941490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.157 [2024-10-07 13:36:30.941645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.157 [2024-10-07 13:36:30.941686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.157 [2024-10-07 13:36:30.941707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.157 [2024-10-07 13:36:30.941733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.157 [2024-10-07 13:36:30.941782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.157 [2024-10-07 13:36:30.941802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.157 [2024-10-07 13:36:30.941815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.157 [2024-10-07 13:36:30.941840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.157 [2024-10-07 13:36:30.954587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.158 [2024-10-07 13:36:30.954709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.158 [2024-10-07 13:36:30.954740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.158 [2024-10-07 13:36:30.954758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.158 [2024-10-07 13:36:30.954790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.158 [2024-10-07 13:36:30.954815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.158 [2024-10-07 13:36:30.954831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.158 [2024-10-07 13:36:30.954844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.158 [2024-10-07 13:36:30.954868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.158 [2024-10-07 13:36:30.969257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.158 [2024-10-07 13:36:30.969400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.158 [2024-10-07 13:36:30.969430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.158 [2024-10-07 13:36:30.969448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.158 [2024-10-07 13:36:30.969474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.158 [2024-10-07 13:36:30.969499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.158 [2024-10-07 13:36:30.969514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.158 [2024-10-07 13:36:30.969528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.158 [2024-10-07 13:36:30.969552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.158 [2024-10-07 13:36:30.984202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.158 [2024-10-07 13:36:30.984573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.158 [2024-10-07 13:36:30.984606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.158 [2024-10-07 13:36:30.984625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.158 [2024-10-07 13:36:30.984684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.158 [2024-10-07 13:36:30.984713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.158 [2024-10-07 13:36:30.984728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.158 [2024-10-07 13:36:30.984741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.158 [2024-10-07 13:36:30.984925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.158 [2024-10-07 13:36:30.998932] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d13cb0 was disconnected and freed. reset controller. 
00:25:56.158 [2024-10-07 13:36:30.998968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.158 [2024-10-07 13:36:30.999379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.158 [2024-10-07 13:36:30.999481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.158 [2024-10-07 13:36:30.999584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.158 [2024-10-07 13:36:30.999614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.158 [2024-10-07 13:36:30.999631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.158 [2024-10-07 13:36:31.000309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.158 [2024-10-07 13:36:31.000656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.158 [2024-10-07 13:36:31.000694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.158 [2024-10-07 13:36:31.000712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.158 [2024-10-07 13:36:31.000727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.158 [2024-10-07 13:36:31.000740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.158 [2024-10-07 13:36:31.000753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.158 [2024-10-07 13:36:31.000960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.158 [2024-10-07 13:36:31.000990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.158 [2024-10-07 13:36:31.001039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.158 [2024-10-07 13:36:31.001060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.158 [2024-10-07 13:36:31.001074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.158 [2024-10-07 13:36:31.001099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.158 [2024-10-07 13:36:31.013581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.158 [2024-10-07 13:36:31.014362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.158 [2024-10-07 13:36:31.014519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.158 [2024-10-07 13:36:31.014550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.158 [2024-10-07 13:36:31.014568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.158 [2024-10-07 13:36:31.015031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.158 [2024-10-07 13:36:31.015061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.158 [2024-10-07 13:36:31.015078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.158 [2024-10-07 13:36:31.015097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.158 [2024-10-07 13:36:31.015316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.158 [2024-10-07 13:36:31.015344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.158 [2024-10-07 13:36:31.015358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.158 [2024-10-07 13:36:31.015371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.158 [2024-10-07 13:36:31.015592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.158 [2024-10-07 13:36:31.015618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.158 [2024-10-07 13:36:31.015633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.158 [2024-10-07 13:36:31.015647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.158 [2024-10-07 13:36:31.015713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.158 [2024-10-07 13:36:31.029864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.158 [2024-10-07 13:36:31.029898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.158 [2024-10-07 13:36:31.030024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.158 [2024-10-07 13:36:31.030054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.158 [2024-10-07 13:36:31.030071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.158 [2024-10-07 13:36:31.030181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.158 [2024-10-07 13:36:31.030207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.158 [2024-10-07 13:36:31.030224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.158 [2024-10-07 13:36:31.030249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.158 [2024-10-07 13:36:31.030271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.158 [2024-10-07 13:36:31.030292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.158 [2024-10-07 13:36:31.030307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.158 [2024-10-07 13:36:31.030321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.158 [2024-10-07 13:36:31.030338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.158 [2024-10-07 13:36:31.030352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.158 [2024-10-07 13:36:31.030365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.158 [2024-10-07 13:36:31.030390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.158 [2024-10-07 13:36:31.030407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.158 [2024-10-07 13:36:31.043129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.158 [2024-10-07 13:36:31.043182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.158 [2024-10-07 13:36:31.043778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.158 [2024-10-07 13:36:31.043809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.158 [2024-10-07 13:36:31.043833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.158 [2024-10-07 13:36:31.043946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.158 [2024-10-07 13:36:31.043972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.158 [2024-10-07 13:36:31.043988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.158 [2024-10-07 13:36:31.044205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.158 [2024-10-07 13:36:31.044234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.158 [2024-10-07 13:36:31.044433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.158 [2024-10-07 13:36:31.044463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.158 [2024-10-07 13:36:31.044478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.158 [2024-10-07 13:36:31.044496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.158 [2024-10-07 13:36:31.044510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.159 [2024-10-07 13:36:31.044523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.159 [2024-10-07 13:36:31.044588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.159 [2024-10-07 13:36:31.044622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.159 [2024-10-07 13:36:31.053277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.159 [2024-10-07 13:36:31.055384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.159 [2024-10-07 13:36:31.055498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.159 [2024-10-07 13:36:31.055528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.159 [2024-10-07 13:36:31.055545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.159 [2024-10-07 13:36:31.060044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.159 [2024-10-07 13:36:31.060077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.159 [2024-10-07 13:36:31.060096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.159 [2024-10-07 13:36:31.060115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.159 [2024-10-07 13:36:31.060204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.159 [2024-10-07 13:36:31.060229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.159 [2024-10-07 13:36:31.060243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.159 [2024-10-07 13:36:31.060256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.159 [2024-10-07 13:36:31.060282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.159 [2024-10-07 13:36:31.060300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.159 [2024-10-07 13:36:31.060313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.159 [2024-10-07 13:36:31.060326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.159 [2024-10-07 13:36:31.060349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.159 [2024-10-07 13:36:31.063361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.159 [2024-10-07 13:36:31.063567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.159 [2024-10-07 13:36:31.063595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.159 [2024-10-07 13:36:31.063612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.159 [2024-10-07 13:36:31.064965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.159 [2024-10-07 13:36:31.065364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.159 [2024-10-07 13:36:31.065393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.159 [2024-10-07 13:36:31.065407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.159 [2024-10-07 13:36:31.065557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.159 [2024-10-07 13:36:31.065820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.159 [2024-10-07 13:36:31.066017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.159 [2024-10-07 13:36:31.066046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.159 [2024-10-07 13:36:31.066063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.159 [2024-10-07 13:36:31.067121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.159 [2024-10-07 13:36:31.067269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.159 [2024-10-07 13:36:31.067293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.159 [2024-10-07 13:36:31.067308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.159 [2024-10-07 13:36:31.067334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.159 [2024-10-07 13:36:31.075999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.159 [2024-10-07 13:36:31.076371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.159 [2024-10-07 13:36:31.076403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.159 [2024-10-07 13:36:31.076421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.159 [2024-10-07 13:36:31.076628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.159 [2024-10-07 13:36:31.076676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.159 [2024-10-07 13:36:31.076979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.159 [2024-10-07 13:36:31.077019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.159 [2024-10-07 13:36:31.077036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.159 [2024-10-07 13:36:31.077050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.159 [2024-10-07 13:36:31.077073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.159 [2024-10-07 13:36:31.077085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.159 [2024-10-07 13:36:31.077137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.159 [2024-10-07 13:36:31.077162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.159 [2024-10-07 13:36:31.077185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.159 [2024-10-07 13:36:31.077200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.159 [2024-10-07 13:36:31.077213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.159 [2024-10-07 13:36:31.077237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.159 [2024-10-07 13:36:31.088625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.159 [2024-10-07 13:36:31.088659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.159 [2024-10-07 13:36:31.088962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.159 [2024-10-07 13:36:31.089008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.159 [2024-10-07 13:36:31.089025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.159 [2024-10-07 13:36:31.089106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.159 [2024-10-07 13:36:31.089132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.159 [2024-10-07 13:36:31.089148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.159 [2024-10-07 13:36:31.089211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.159 [2024-10-07 13:36:31.089238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.159 [2024-10-07 13:36:31.089261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.159 [2024-10-07 13:36:31.089276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.159 [2024-10-07 13:36:31.089289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.159 [2024-10-07 13:36:31.089306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.159 [2024-10-07 13:36:31.089320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.159 [2024-10-07 13:36:31.089333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.159 [2024-10-07 13:36:31.089359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.159 [2024-10-07 13:36:31.089375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.159 [2024-10-07 13:36:31.101767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.159 [2024-10-07 13:36:31.101801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.159 [2024-10-07 13:36:31.102565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.159 [2024-10-07 13:36:31.102612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.159 [2024-10-07 13:36:31.102630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.159 [2024-10-07 13:36:31.102752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.159 [2024-10-07 13:36:31.102780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.159 [2024-10-07 13:36:31.102796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.159 [2024-10-07 13:36:31.102870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.159 [2024-10-07 13:36:31.102898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.159 [2024-10-07 13:36:31.102920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.159 [2024-10-07 13:36:31.102935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.159 [2024-10-07 13:36:31.102955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.159 [2024-10-07 13:36:31.102973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.159 [2024-10-07 13:36:31.102987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.159 [2024-10-07 13:36:31.103000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.159 [2024-10-07 13:36:31.103024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.159 [2024-10-07 13:36:31.103041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.159 [2024-10-07 13:36:31.114467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.159 [2024-10-07 13:36:31.114500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.159 [2024-10-07 13:36:31.114718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.159 [2024-10-07 13:36:31.114749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.159 [2024-10-07 13:36:31.114767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.160 [2024-10-07 13:36:31.114869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.160 [2024-10-07 13:36:31.114896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.160 [2024-10-07 13:36:31.114912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.160 [2024-10-07 13:36:31.115561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.160 [2024-10-07 13:36:31.115590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.160 [2024-10-07 13:36:31.116658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.160 [2024-10-07 13:36:31.116708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.160 [2024-10-07 13:36:31.116730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.160 [2024-10-07 13:36:31.116747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.160 [2024-10-07 13:36:31.116762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.160 [2024-10-07 13:36:31.116774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.160 [2024-10-07 13:36:31.117258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.160 [2024-10-07 13:36:31.117282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.160 [2024-10-07 13:36:31.124759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.160 [2024-10-07 13:36:31.124791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.160 [2024-10-07 13:36:31.125123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.160 [2024-10-07 13:36:31.125154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.160 [2024-10-07 13:36:31.125172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.160 [2024-10-07 13:36:31.125276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.160 [2024-10-07 13:36:31.125303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.160 [2024-10-07 13:36:31.125325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.160 [2024-10-07 13:36:31.125452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.160 [2024-10-07 13:36:31.125480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.160 [2024-10-07 13:36:31.125524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.160 [2024-10-07 13:36:31.125544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.160 [2024-10-07 13:36:31.125559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.160 [2024-10-07 13:36:31.125576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.160 [2024-10-07 13:36:31.125590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.160 [2024-10-07 13:36:31.125603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.160 [2024-10-07 13:36:31.125643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.160 [2024-10-07 13:36:31.125659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.160 [2024-10-07 13:36:31.136534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.160 [2024-10-07 13:36:31.136581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.160 [2024-10-07 13:36:31.136809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.160 [2024-10-07 13:36:31.136840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.160 [2024-10-07 13:36:31.136858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.160 [2024-10-07 13:36:31.136932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.160 [2024-10-07 13:36:31.136959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.160 [2024-10-07 13:36:31.136976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.160 [2024-10-07 13:36:31.137001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.160 [2024-10-07 13:36:31.137023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.160 [2024-10-07 13:36:31.137044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.160 [2024-10-07 13:36:31.137059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.160 [2024-10-07 13:36:31.137073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.160 [2024-10-07 13:36:31.137090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.160 [2024-10-07 13:36:31.137104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.160 [2024-10-07 13:36:31.137117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.160 [2024-10-07 13:36:31.137142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.160 [2024-10-07 13:36:31.137158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.160 [2024-10-07 13:36:31.148418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.160 [2024-10-07 13:36:31.148458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.160 [2024-10-07 13:36:31.148576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.160 [2024-10-07 13:36:31.148607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.160 [2024-10-07 13:36:31.148624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.160 [2024-10-07 13:36:31.148736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.160 [2024-10-07 13:36:31.148763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.160 [2024-10-07 13:36:31.148780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.160 [2024-10-07 13:36:31.148914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.160 [2024-10-07 13:36:31.148943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.160 [2024-10-07 13:36:31.149058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.160 [2024-10-07 13:36:31.149082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.160 [2024-10-07 13:36:31.149097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.160 [2024-10-07 13:36:31.149114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.160 [2024-10-07 13:36:31.149129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.160 [2024-10-07 13:36:31.149141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.160 [2024-10-07 13:36:31.149266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.160 [2024-10-07 13:36:31.149288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.160 [2024-10-07 13:36:31.159518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.160 [2024-10-07 13:36:31.159553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.160 [2024-10-07 13:36:31.159726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.160 [2024-10-07 13:36:31.159757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.160 [2024-10-07 13:36:31.159775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.160 [2024-10-07 13:36:31.159855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.160 [2024-10-07 13:36:31.159882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.160 [2024-10-07 13:36:31.159898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.160 [2024-10-07 13:36:31.160157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.160 [2024-10-07 13:36:31.160185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.160 [2024-10-07 13:36:31.160401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.160 [2024-10-07 13:36:31.160425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.160 [2024-10-07 13:36:31.160440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.160 [2024-10-07 13:36:31.160463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.160 [2024-10-07 13:36:31.160478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.160 [2024-10-07 13:36:31.160492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.160 [2024-10-07 13:36:31.160574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.160 [2024-10-07 13:36:31.160612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.160 [2024-10-07 13:36:31.173809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.160 [2024-10-07 13:36:31.173843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.160 [2024-10-07 13:36:31.174471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.160 [2024-10-07 13:36:31.174502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.160 [2024-10-07 13:36:31.174520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.160 [2024-10-07 13:36:31.174608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.160 [2024-10-07 13:36:31.174633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.160 [2024-10-07 13:36:31.174648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.160 [2024-10-07 13:36:31.175038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.160 [2024-10-07 13:36:31.175088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.160 [2024-10-07 13:36:31.175163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.160 [2024-10-07 13:36:31.175182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.160 [2024-10-07 13:36:31.175196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.161 [2024-10-07 13:36:31.175214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.161 [2024-10-07 13:36:31.175229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.161 [2024-10-07 13:36:31.175242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.161 [2024-10-07 13:36:31.175424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.161 [2024-10-07 13:36:31.175462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.161 [2024-10-07 13:36:31.185003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.161 [2024-10-07 13:36:31.185036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.161 [2024-10-07 13:36:31.185332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.161 [2024-10-07 13:36:31.185362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.161 [2024-10-07 13:36:31.185380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.161 [2024-10-07 13:36:31.185469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.161 [2024-10-07 13:36:31.185497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.161 [2024-10-07 13:36:31.185514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.161 [2024-10-07 13:36:31.187756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.161 [2024-10-07 13:36:31.187788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.161 [2024-10-07 13:36:31.188193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.161 [2024-10-07 13:36:31.188232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.161 [2024-10-07 13:36:31.188245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.161 [2024-10-07 13:36:31.188265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.161 [2024-10-07 13:36:31.188293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.161 [2024-10-07 13:36:31.188306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.161 [2024-10-07 13:36:31.189286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.161 [2024-10-07 13:36:31.189325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.161 [2024-10-07 13:36:31.195129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.161 [2024-10-07 13:36:31.195175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.161 [2024-10-07 13:36:31.195357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.161 [2024-10-07 13:36:31.195386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.161 [2024-10-07 13:36:31.195403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.161 [2024-10-07 13:36:31.195519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.161 [2024-10-07 13:36:31.195554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.161 [2024-10-07 13:36:31.195571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.161 [2024-10-07 13:36:31.195590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.161 [2024-10-07 13:36:31.195616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.161 [2024-10-07 13:36:31.195635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.161 [2024-10-07 13:36:31.195648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.161 [2024-10-07 13:36:31.195661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.161 [2024-10-07 13:36:31.199647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.161 [2024-10-07 13:36:31.199683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.161 [2024-10-07 13:36:31.199700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.161 [2024-10-07 13:36:31.199713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.161 [2024-10-07 13:36:31.199910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.161 [2024-10-07 13:36:31.205216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.161 [2024-10-07 13:36:31.205443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.161 [2024-10-07 13:36:31.205483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.161 [2024-10-07 13:36:31.205503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.161 [2024-10-07 13:36:31.205730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.161 [2024-10-07 13:36:31.205790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.161 [2024-10-07 13:36:31.205825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.161 [2024-10-07 13:36:31.205842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.161 [2024-10-07 13:36:31.205855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.161 [2024-10-07 13:36:31.205880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.161 [2024-10-07 13:36:31.205977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.161 [2024-10-07 13:36:31.206005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.161 [2024-10-07 13:36:31.206028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.161 [2024-10-07 13:36:31.206054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.161 [2024-10-07 13:36:31.206078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.161 [2024-10-07 13:36:31.206094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.161 [2024-10-07 13:36:31.206107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.161 [2024-10-07 13:36:31.206131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.161 [2024-10-07 13:36:31.216987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.161 [2024-10-07 13:36:31.217291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.161 [2024-10-07 13:36:31.217451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.161 [2024-10-07 13:36:31.217481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.161 [2024-10-07 13:36:31.217499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.161 [2024-10-07 13:36:31.217804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.161 [2024-10-07 13:36:31.217834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.161 [2024-10-07 13:36:31.217851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.161 [2024-10-07 13:36:31.217870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.161 [2024-10-07 13:36:31.217922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.161 [2024-10-07 13:36:31.217944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.161 [2024-10-07 13:36:31.217957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.161 [2024-10-07 13:36:31.217970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.161 [2024-10-07 13:36:31.217995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.161 [2024-10-07 13:36:31.218018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.161 [2024-10-07 13:36:31.218031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.161 [2024-10-07 13:36:31.218044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.161 [2024-10-07 13:36:31.218571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.161 [2024-10-07 13:36:31.228053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.161 [2024-10-07 13:36:31.228087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.161 [2024-10-07 13:36:31.228401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.161 [2024-10-07 13:36:31.228431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.161 [2024-10-07 13:36:31.228449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.161 [2024-10-07 13:36:31.228560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.161 [2024-10-07 13:36:31.228587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.161 [2024-10-07 13:36:31.228603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.161 [2024-10-07 13:36:31.230876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.161 [2024-10-07 13:36:31.230908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.161 [2024-10-07 13:36:31.231704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.162 [2024-10-07 13:36:31.231729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.162 [2024-10-07 13:36:31.231750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.162 [2024-10-07 13:36:31.231783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.162 [2024-10-07 13:36:31.231797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.162 [2024-10-07 13:36:31.231811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.162 [2024-10-07 13:36:31.232458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.162 [2024-10-07 13:36:31.232483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.162 [2024-10-07 13:36:31.238171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.162 [2024-10-07 13:36:31.238215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.162 [2024-10-07 13:36:31.238414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.162 [2024-10-07 13:36:31.238444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.162 [2024-10-07 13:36:31.238461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.162 [2024-10-07 13:36:31.238545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.162 [2024-10-07 13:36:31.238573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.162 [2024-10-07 13:36:31.238590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.162 [2024-10-07 13:36:31.238608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.162 [2024-10-07 13:36:31.238643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.162 [2024-10-07 13:36:31.238662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.162 [2024-10-07 13:36:31.238689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.162 [2024-10-07 13:36:31.238703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.162 [2024-10-07 13:36:31.238728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.162 [2024-10-07 13:36:31.238746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.162 [2024-10-07 13:36:31.238759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.162 [2024-10-07 13:36:31.238772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.162 [2024-10-07 13:36:31.238810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.162 [2024-10-07 13:36:31.248268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.162 [2024-10-07 13:36:31.248419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.162 [2024-10-07 13:36:31.248450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.162 [2024-10-07 13:36:31.248468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.162 [2024-10-07 13:36:31.248620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.162 [2024-10-07 13:36:31.248660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.162 [2024-10-07 13:36:31.248945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.162 [2024-10-07 13:36:31.248975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.162 [2024-10-07 13:36:31.248993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.162 [2024-10-07 13:36:31.249008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.162 [2024-10-07 13:36:31.249020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.162 [2024-10-07 13:36:31.249034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.162 [2024-10-07 13:36:31.249201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.162 [2024-10-07 13:36:31.249244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.162 [2024-10-07 13:36:31.249304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.162 [2024-10-07 13:36:31.249325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.162 [2024-10-07 13:36:31.249339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.162 [2024-10-07 13:36:31.249379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.162 [2024-10-07 13:36:31.261589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.162 [2024-10-07 13:36:31.261621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.162 [2024-10-07 13:36:31.261801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.162 [2024-10-07 13:36:31.261836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.162 [2024-10-07 13:36:31.261854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.162 [2024-10-07 13:36:31.261967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.162 [2024-10-07 13:36:31.261993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.162 [2024-10-07 13:36:31.262009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.162 [2024-10-07 13:36:31.262034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.162 [2024-10-07 13:36:31.262056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.162 [2024-10-07 13:36:31.262077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.162 [2024-10-07 13:36:31.262093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.162 [2024-10-07 13:36:31.262107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.162 [2024-10-07 13:36:31.262123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.162 [2024-10-07 13:36:31.262137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.162 [2024-10-07 13:36:31.262150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.162 [2024-10-07 13:36:31.262175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.162 [2024-10-07 13:36:31.262192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.162 [2024-10-07 13:36:31.277943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.162 [2024-10-07 13:36:31.277976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.162 [2024-10-07 13:36:31.278185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.162 [2024-10-07 13:36:31.278215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.162 [2024-10-07 13:36:31.278233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.162 [2024-10-07 13:36:31.278342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.162 [2024-10-07 13:36:31.278368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.162 [2024-10-07 13:36:31.278384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.162 [2024-10-07 13:36:31.278410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.162 [2024-10-07 13:36:31.278432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.162 [2024-10-07 13:36:31.278452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.162 [2024-10-07 13:36:31.278467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.162 [2024-10-07 13:36:31.278481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.162 [2024-10-07 13:36:31.278498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.162 [2024-10-07 13:36:31.278513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.162 [2024-10-07 13:36:31.278531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.162 [2024-10-07 13:36:31.278557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.162 [2024-10-07 13:36:31.278574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.162 [2024-10-07 13:36:31.292716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.162 [2024-10-07 13:36:31.292750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.162 [2024-10-07 13:36:31.292858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.162 [2024-10-07 13:36:31.292888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.162 [2024-10-07 13:36:31.292906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.162 [2024-10-07 13:36:31.293017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.162 [2024-10-07 13:36:31.293044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.162 [2024-10-07 13:36:31.293060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.162 [2024-10-07 13:36:31.293086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.162 [2024-10-07 13:36:31.293108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.162 [2024-10-07 13:36:31.293129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.162 [2024-10-07 13:36:31.293144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.162 [2024-10-07 13:36:31.293157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.162 [2024-10-07 13:36:31.293174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.162 [2024-10-07 13:36:31.293189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.162 [2024-10-07 13:36:31.293202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.162 [2024-10-07 13:36:31.293227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.162 [2024-10-07 13:36:31.293243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.163 [2024-10-07 13:36:31.308363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.163 [2024-10-07 13:36:31.308412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.163 [2024-10-07 13:36:31.308943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.163 [2024-10-07 13:36:31.308975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.163 [2024-10-07 13:36:31.308992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.163 [2024-10-07 13:36:31.309164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.163 [2024-10-07 13:36:31.309191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.163 [2024-10-07 13:36:31.309207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.163 [2024-10-07 13:36:31.309462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.163 [2024-10-07 13:36:31.309498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.163 [2024-10-07 13:36:31.309550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.163 [2024-10-07 13:36:31.309572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.163 [2024-10-07 13:36:31.309586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.163 [2024-10-07 13:36:31.309603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.163 [2024-10-07 13:36:31.309618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.163 [2024-10-07 13:36:31.309630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.163 [2024-10-07 13:36:31.309655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.163 [2024-10-07 13:36:31.309682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.163 [2024-10-07 13:36:31.319992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.163 [2024-10-07 13:36:31.320025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.163 [2024-10-07 13:36:31.320251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.163 [2024-10-07 13:36:31.320281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.163 [2024-10-07 13:36:31.320299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.163 [2024-10-07 13:36:31.320406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.163 [2024-10-07 13:36:31.320433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.163 [2024-10-07 13:36:31.320449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.163 [2024-10-07 13:36:31.322305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.163 [2024-10-07 13:36:31.322338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.163 [2024-10-07 13:36:31.323183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.163 [2024-10-07 13:36:31.323207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.163 [2024-10-07 13:36:31.323228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.163 [2024-10-07 13:36:31.323245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.163 [2024-10-07 13:36:31.323260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.163 [2024-10-07 13:36:31.323272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.163 [2024-10-07 13:36:31.323549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.163 [2024-10-07 13:36:31.323574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.163 [2024-10-07 13:36:31.330104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.163 [2024-10-07 13:36:31.330149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.163 [2024-10-07 13:36:31.330354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.163 [2024-10-07 13:36:31.330383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.163 [2024-10-07 13:36:31.330406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.163 [2024-10-07 13:36:31.330523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.163 [2024-10-07 13:36:31.330551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.163 [2024-10-07 13:36:31.330567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.163 [2024-10-07 13:36:31.330587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*:
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.163 [2024-10-07 13:36:31.330613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.163 [2024-10-07 13:36:31.330631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.163 [2024-10-07 13:36:31.330644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.163 [2024-10-07 13:36:31.330657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.163 [2024-10-07 13:36:31.330692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.163 [2024-10-07 13:36:31.330711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.163 [2024-10-07 13:36:31.330724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.163 [2024-10-07 13:36:31.330737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.163 [2024-10-07 13:36:31.330760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.163 [2024-10-07 13:36:31.340188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.163 [2024-10-07 13:36:31.340338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.163 [2024-10-07 13:36:31.340369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.163 [2024-10-07 13:36:31.340387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.163 [2024-10-07 13:36:31.340600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.163 [2024-10-07 13:36:31.340686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.163 [2024-10-07 13:36:31.340735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.163 [2024-10-07 13:36:31.340752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.163 [2024-10-07 13:36:31.340767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.163 [2024-10-07 13:36:31.340791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.163 [2024-10-07 13:36:31.340905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.163 [2024-10-07 13:36:31.340933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.163 [2024-10-07 13:36:31.340950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.163 [2024-10-07 13:36:31.340975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.163 [2024-10-07 13:36:31.340999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.163 [2024-10-07 13:36:31.341014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.163 [2024-10-07 13:36:31.341033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.163 [2024-10-07 13:36:31.341059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.163 [2024-10-07 13:36:31.352799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.163 [2024-10-07 13:36:31.352832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.163 [2024-10-07 13:36:31.352974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.163 [2024-10-07 13:36:31.353004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.163 [2024-10-07 13:36:31.353022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.163 [2024-10-07 13:36:31.353102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.163 [2024-10-07 13:36:31.353129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.163 [2024-10-07 13:36:31.353145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.163 [2024-10-07 13:36:31.353171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.163 [2024-10-07 13:36:31.353193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.163 [2024-10-07 13:36:31.353214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.163 [2024-10-07 13:36:31.353229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.163 [2024-10-07 13:36:31.353243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.163 [2024-10-07 13:36:31.353260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.163 [2024-10-07 13:36:31.353275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.163 [2024-10-07 13:36:31.353287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.163 [2024-10-07 13:36:31.353313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.163 [2024-10-07 13:36:31.353330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.163 [2024-10-07 13:36:31.362928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.163 [2024-10-07 13:36:31.362990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.163 [2024-10-07 13:36:31.363154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.163 [2024-10-07 13:36:31.363183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.163 [2024-10-07 13:36:31.363201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.163 [2024-10-07 13:36:31.363341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.163 [2024-10-07 13:36:31.363368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.164 [2024-10-07 13:36:31.363384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.164 [2024-10-07 13:36:31.363402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.164 [2024-10-07 13:36:31.366062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.164 [2024-10-07 13:36:31.366096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.164 [2024-10-07 13:36:31.366112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.164 [2024-10-07 13:36:31.366125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.164 [2024-10-07 13:36:31.366332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.164 [2024-10-07 13:36:31.366373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.164 [2024-10-07 13:36:31.366387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.164 [2024-10-07 13:36:31.366415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.164 [2024-10-07 13:36:31.366551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.164 [2024-10-07 13:36:31.373083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.164 [2024-10-07 13:36:31.373131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.164 [2024-10-07 13:36:31.373256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.164 [2024-10-07 13:36:31.373285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.164 [2024-10-07 13:36:31.373303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.164 [2024-10-07 13:36:31.373644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.164 [2024-10-07 13:36:31.373692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.164 [2024-10-07 13:36:31.373709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.164 [2024-10-07 13:36:31.373729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.164 [2024-10-07 13:36:31.373781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.164 [2024-10-07 13:36:31.373804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.164 [2024-10-07 13:36:31.373817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.164 [2024-10-07 13:36:31.373830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.164 [2024-10-07 13:36:31.373855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.164 [2024-10-07 13:36:31.373872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.164 [2024-10-07 13:36:31.373885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.164 [2024-10-07 13:36:31.373898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.164 [2024-10-07 13:36:31.373920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.164 [2024-10-07 13:36:31.387200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.164 [2024-10-07 13:36:31.387234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.164 [2024-10-07 13:36:31.387552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.164 [2024-10-07 13:36:31.387585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.164 [2024-10-07 13:36:31.387603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.164 [2024-10-07 13:36:31.387728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.164 [2024-10-07 13:36:31.387755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.164 [2024-10-07 13:36:31.387772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.164 [2024-10-07 13:36:31.387977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.164 [2024-10-07 13:36:31.388009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.164 [2024-10-07 13:36:31.388057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.164 [2024-10-07 13:36:31.388078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.164 [2024-10-07 13:36:31.388092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.164 [2024-10-07 13:36:31.388109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.164 [2024-10-07 13:36:31.388124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.164 [2024-10-07 13:36:31.388137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.164 [2024-10-07 13:36:31.388162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.164 [2024-10-07 13:36:31.388179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.164 [2024-10-07 13:36:31.402214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.164 [2024-10-07 13:36:31.402247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.164 [2024-10-07 13:36:31.402609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.164 [2024-10-07 13:36:31.402642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.164 [2024-10-07 13:36:31.402660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.164 [2024-10-07 13:36:31.402769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.164 [2024-10-07 13:36:31.402795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.164 [2024-10-07 13:36:31.402811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.164 [2024-10-07 13:36:31.403035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.164 [2024-10-07 13:36:31.403067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.164 [2024-10-07 13:36:31.403118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.164 [2024-10-07 13:36:31.403138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.164 [2024-10-07 13:36:31.403152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.164 [2024-10-07 13:36:31.403170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.164 [2024-10-07 13:36:31.403184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.164 [2024-10-07 13:36:31.403197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.164 [2024-10-07 13:36:31.403379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.164 [2024-10-07 13:36:31.403423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.164 [2024-10-07 13:36:31.418544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.164 [2024-10-07 13:36:31.418578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.164 [2024-10-07 13:36:31.419132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.164 [2024-10-07 13:36:31.419166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.164 [2024-10-07 13:36:31.419198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.164 [2024-10-07 13:36:31.419345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.164 [2024-10-07 13:36:31.419371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.164 [2024-10-07 13:36:31.419387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.164 [2024-10-07 13:36:31.419607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.164 [2024-10-07 13:36:31.419635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.164 [2024-10-07 13:36:31.419848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.164 [2024-10-07 13:36:31.419875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.164 [2024-10-07 13:36:31.419890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.164 [2024-10-07 13:36:31.419908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.164 [2024-10-07 13:36:31.419923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.164 [2024-10-07 13:36:31.419936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.164 [2024-10-07 13:36:31.420138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.164 [2024-10-07 13:36:31.420162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.164 [2024-10-07 13:36:31.433329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.164 [2024-10-07 13:36:31.433362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.164 [2024-10-07 13:36:31.433496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.164 [2024-10-07 13:36:31.433525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.164 [2024-10-07 13:36:31.433542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.164 [2024-10-07 13:36:31.433624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.164 [2024-10-07 13:36:31.433650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.164 [2024-10-07 13:36:31.433674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.164 [2024-10-07 13:36:31.433703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.164 [2024-10-07 13:36:31.433726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.164 [2024-10-07 13:36:31.433747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.164 [2024-10-07 13:36:31.433767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.164 [2024-10-07 13:36:31.433786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.164 [2024-10-07 13:36:31.433804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.165 [2024-10-07 13:36:31.433819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.165 [2024-10-07 13:36:31.433831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.165 [2024-10-07 13:36:31.433856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.165 [2024-10-07 13:36:31.433872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.165 [2024-10-07 13:36:31.444304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.165 [2024-10-07 13:36:31.444338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.165 [2024-10-07 13:36:31.444449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.165 [2024-10-07 13:36:31.444477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.165 [2024-10-07 13:36:31.444494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.165 [2024-10-07 13:36:31.444599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.165 [2024-10-07 13:36:31.444625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.165 [2024-10-07 13:36:31.444641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.165 [2024-10-07 13:36:31.447327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.165 [2024-10-07 13:36:31.447359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.165 [2024-10-07 13:36:31.449296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.165 [2024-10-07 13:36:31.449323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.165 [2024-10-07 13:36:31.449338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.165 [2024-10-07 13:36:31.449355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.165 [2024-10-07 13:36:31.449369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.165 [2024-10-07 13:36:31.449383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.165 [2024-10-07 13:36:31.450201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.165 [2024-10-07 13:36:31.450228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.165 [2024-10-07 13:36:31.454707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.165 [2024-10-07 13:36:31.454740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.165 [2024-10-07 13:36:31.455077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.165 [2024-10-07 13:36:31.455108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.165 [2024-10-07 13:36:31.455126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.165 [2024-10-07 13:36:31.455238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.165 [2024-10-07 13:36:31.455265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.165 [2024-10-07 13:36:31.455281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.165 [2024-10-07 13:36:31.455399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.165 [2024-10-07 13:36:31.455426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.165 [2024-10-07 13:36:31.455465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.165 [2024-10-07 13:36:31.455484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.165 [2024-10-07 13:36:31.455498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.165 [2024-10-07 13:36:31.455515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.165 [2024-10-07 13:36:31.455531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.165 [2024-10-07 13:36:31.455544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.165 [2024-10-07 13:36:31.455568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.165 [2024-10-07 13:36:31.455585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.165 [2024-10-07 13:36:31.464890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.165 [2024-10-07 13:36:31.464922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.165 [2024-10-07 13:36:31.465035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.165 [2024-10-07 13:36:31.465063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.165 [2024-10-07 13:36:31.465081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.165 [2024-10-07 13:36:31.465157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.165 [2024-10-07 13:36:31.465183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.165 [2024-10-07 13:36:31.465199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.165 [2024-10-07 13:36:31.465454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.165 [2024-10-07 13:36:31.465483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.165 [2024-10-07 13:36:31.465629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.165 [2024-10-07 13:36:31.465652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.165 [2024-10-07 13:36:31.465676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.165 [2024-10-07 13:36:31.465697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.165 [2024-10-07 13:36:31.465712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.165 [2024-10-07 13:36:31.465725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.165 [2024-10-07 13:36:31.465751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.165 [2024-10-07 13:36:31.465768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.165 [2024-10-07 13:36:31.479167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.165 [2024-10-07 13:36:31.479199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.165 [2024-10-07 13:36:31.479866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.165 [2024-10-07 13:36:31.479898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.165 [2024-10-07 13:36:31.479915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.165 [2024-10-07 13:36:31.480030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.165 [2024-10-07 13:36:31.480055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.165 [2024-10-07 13:36:31.480070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.165 [2024-10-07 13:36:31.480457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.165 [2024-10-07 13:36:31.480487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.165 [2024-10-07 13:36:31.480560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.165 [2024-10-07 13:36:31.480596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.165 [2024-10-07 13:36:31.480611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.165 [2024-10-07 13:36:31.480629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.165 [2024-10-07 13:36:31.480644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.165 [2024-10-07 13:36:31.480657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.165 [2024-10-07 13:36:31.480851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.165 [2024-10-07 13:36:31.480875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.166 [2024-10-07 13:36:31.493572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.166 [2024-10-07 13:36:31.493605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.166 [2024-10-07 13:36:31.493755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.166 [2024-10-07 13:36:31.493784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.166 [2024-10-07 13:36:31.493801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.166 [2024-10-07 13:36:31.493881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.166 [2024-10-07 13:36:31.493908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.166 [2024-10-07 13:36:31.493924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.166 [2024-10-07 13:36:31.493950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.166 [2024-10-07 13:36:31.493971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.166 [2024-10-07 13:36:31.493993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.166 [2024-10-07 13:36:31.494008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.166 [2024-10-07 13:36:31.494027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.166 [2024-10-07 13:36:31.494044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.166 [2024-10-07 13:36:31.494058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.166 [2024-10-07 13:36:31.494072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.166 [2024-10-07 13:36:31.494097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.166 [2024-10-07 13:36:31.494113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.166 [2024-10-07 13:36:31.503701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.166 [2024-10-07 13:36:31.503733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.166 [2024-10-07 13:36:31.503861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.166 [2024-10-07 13:36:31.503889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.166 [2024-10-07 13:36:31.503906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.166 [2024-10-07 13:36:31.504038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.166 [2024-10-07 13:36:31.504064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.166 [2024-10-07 13:36:31.504080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.166 [2024-10-07 13:36:31.506966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.166 [2024-10-07 13:36:31.506999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.166 [2024-10-07 13:36:31.509920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.166 [2024-10-07 13:36:31.509947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.166 [2024-10-07 13:36:31.509962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.166 [2024-10-07 13:36:31.509980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.166 [2024-10-07 13:36:31.509995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.166 [2024-10-07 13:36:31.510007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.166 [2024-10-07 13:36:31.510885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.166 [2024-10-07 13:36:31.510912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.166 [2024-10-07 13:36:31.513814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.166 [2024-10-07 13:36:31.513860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.166 [2024-10-07 13:36:31.513994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.166 [2024-10-07 13:36:31.514022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.166 [2024-10-07 13:36:31.514038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.166 [2024-10-07 13:36:31.514243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.166 [2024-10-07 13:36:31.514270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.166 [2024-10-07 13:36:31.514297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.166 [2024-10-07 13:36:31.514318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.166 [2024-10-07 13:36:31.514500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.166 [2024-10-07 13:36:31.514525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.166 [2024-10-07 13:36:31.514539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.166 [2024-10-07 13:36:31.514553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.166 [2024-10-07 13:36:31.514602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.166 [2024-10-07 13:36:31.514623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.166 [2024-10-07 13:36:31.514636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.166 [2024-10-07 13:36:31.514650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.166 [2024-10-07 13:36:31.514682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.166 [2024-10-07 13:36:31.524148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.166 [2024-10-07 13:36:31.524181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.167 [2024-10-07 13:36:31.526403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.167 [2024-10-07 13:36:31.526436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.167 [2024-10-07 13:36:31.526454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.167 [2024-10-07 13:36:31.526595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.167 [2024-10-07 13:36:31.526621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.167 [2024-10-07 13:36:31.526636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.167 [2024-10-07 13:36:31.527496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.167 [2024-10-07 13:36:31.527526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.167 [2024-10-07 13:36:31.527968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.167 [2024-10-07 13:36:31.527995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.167 [2024-10-07 13:36:31.528009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.167 [2024-10-07 13:36:31.528043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.167 [2024-10-07 13:36:31.528058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.167 [2024-10-07 13:36:31.528072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.167 [2024-10-07 13:36:31.528319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.167 [2024-10-07 13:36:31.528343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.167 [2024-10-07 13:36:31.534301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.167 [2024-10-07 13:36:31.534339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.167 [2024-10-07 13:36:31.534535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.167 [2024-10-07 13:36:31.534563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.167 [2024-10-07 13:36:31.534581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.167 [2024-10-07 13:36:31.534724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.167 [2024-10-07 13:36:31.534751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.167 [2024-10-07 13:36:31.534768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.167 [2024-10-07 13:36:31.535124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.167 [2024-10-07 13:36:31.535153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.167 [2024-10-07 13:36:31.535177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.167 [2024-10-07 13:36:31.535192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.167 [2024-10-07 13:36:31.535205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.167 [2024-10-07 13:36:31.535222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.167 [2024-10-07 13:36:31.535235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.167 [2024-10-07 13:36:31.535250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.167 [2024-10-07 13:36:31.535274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.167 [2024-10-07 13:36:31.535290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.167 [2024-10-07 13:36:31.544683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.167 [2024-10-07 13:36:31.544716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.167 [2024-10-07 13:36:31.544826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.167 [2024-10-07 13:36:31.544854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.167 [2024-10-07 13:36:31.544871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.167 [2024-10-07 13:36:31.544946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.167 [2024-10-07 13:36:31.544973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.167 [2024-10-07 13:36:31.544989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.167 [2024-10-07 13:36:31.545175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.167 [2024-10-07 13:36:31.545218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.167 [2024-10-07 13:36:31.545279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.167 [2024-10-07 13:36:31.545300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.167 [2024-10-07 13:36:31.545314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.167 [2024-10-07 13:36:31.545336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.167 [2024-10-07 13:36:31.545352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.167 [2024-10-07 13:36:31.545365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.167 [2024-10-07 13:36:31.545548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.167 [2024-10-07 13:36:31.545571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.167 [2024-10-07 13:36:31.557373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.167 [2024-10-07 13:36:31.557406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.167 [2024-10-07 13:36:31.557518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.167 [2024-10-07 13:36:31.557547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.167 [2024-10-07 13:36:31.557564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.167 [2024-10-07 13:36:31.557674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.167 [2024-10-07 13:36:31.557701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.167 [2024-10-07 13:36:31.557717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.168 [2024-10-07 13:36:31.557742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.168 [2024-10-07 13:36:31.557765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.168 [2024-10-07 13:36:31.557786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.168 [2024-10-07 13:36:31.557801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.168 [2024-10-07 13:36:31.557814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.168 [2024-10-07 13:36:31.557831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.168 [2024-10-07 13:36:31.557845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.168 [2024-10-07 13:36:31.557858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.168 [2024-10-07 13:36:31.557883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.168 [2024-10-07 13:36:31.557899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.168 [2024-10-07 13:36:31.571090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.168 [2024-10-07 13:36:31.571125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.168 [2024-10-07 13:36:31.572594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.168 [2024-10-07 13:36:31.572626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.168 [2024-10-07 13:36:31.572644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.168 [2024-10-07 13:36:31.572790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.168 [2024-10-07 13:36:31.572817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.168 [2024-10-07 13:36:31.572838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.168 [2024-10-07 13:36:31.573286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.168 [2024-10-07 13:36:31.573317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.168 [2024-10-07 13:36:31.573396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.168 [2024-10-07 13:36:31.573432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.168 [2024-10-07 13:36:31.573446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.168 [2024-10-07 13:36:31.573464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.168 [2024-10-07 13:36:31.573478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.168 [2024-10-07 13:36:31.573492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.168 [2024-10-07 13:36:31.573517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.168 [2024-10-07 13:36:31.573534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.168 [2024-10-07 13:36:31.581222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.168 [2024-10-07 13:36:31.581285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.168 [2024-10-07 13:36:31.581446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.168 [2024-10-07 13:36:31.581474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.168 [2024-10-07 13:36:31.581491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.168 [2024-10-07 13:36:31.581582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.168 [2024-10-07 13:36:31.581608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.168 [2024-10-07 13:36:31.581624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.168 [2024-10-07 13:36:31.581643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.168 [2024-10-07 13:36:31.581678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.168 [2024-10-07 13:36:31.581699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.168 [2024-10-07 13:36:31.581712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.168 [2024-10-07 13:36:31.581725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.168 [2024-10-07 13:36:31.581751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.168 [2024-10-07 13:36:31.581769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.168 [2024-10-07 13:36:31.581781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.168 [2024-10-07 13:36:31.581794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.168 [2024-10-07 13:36:31.581816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.168 [2024-10-07 13:36:31.592071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.168 [2024-10-07 13:36:31.592104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.168 [2024-10-07 13:36:31.592227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.168 [2024-10-07 13:36:31.592257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.168 [2024-10-07 13:36:31.592273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.168 [2024-10-07 13:36:31.592371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.168 [2024-10-07 13:36:31.592411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.168 [2024-10-07 13:36:31.592428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.168 [2024-10-07 13:36:31.592454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.168 [2024-10-07 13:36:31.592475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.168 [2024-10-07 13:36:31.592497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.168 [2024-10-07 13:36:31.592512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.168 [2024-10-07 13:36:31.592527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.168 [2024-10-07 13:36:31.592543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.169 [2024-10-07 13:36:31.592558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.169 [2024-10-07 13:36:31.592572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.169 [2024-10-07 13:36:31.592612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.169 [2024-10-07 13:36:31.592629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.169 [2024-10-07 13:36:31.603841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.169 [2024-10-07 13:36:31.603875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.169 [2024-10-07 13:36:31.604017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.169 [2024-10-07 13:36:31.604046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.169 [2024-10-07 13:36:31.604063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.169 [2024-10-07 13:36:31.604141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.169 [2024-10-07 13:36:31.604167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.169 [2024-10-07 13:36:31.604183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.169 [2024-10-07 13:36:31.604209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.169 [2024-10-07 13:36:31.604230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.169 [2024-10-07 13:36:31.604252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.169 [2024-10-07 13:36:31.604267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.169 [2024-10-07 13:36:31.604280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.169 [2024-10-07 13:36:31.604298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.169 [2024-10-07 13:36:31.604318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.169 [2024-10-07 13:36:31.604332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.169 [2024-10-07 13:36:31.604358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.169 [2024-10-07 13:36:31.604375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.169 [2024-10-07 13:36:31.618973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.169 [2024-10-07 13:36:31.619008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.169 [2024-10-07 13:36:31.620386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.169 [2024-10-07 13:36:31.620418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.169 [2024-10-07 13:36:31.620436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.169 [2024-10-07 13:36:31.620515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.169 [2024-10-07 13:36:31.620540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.169 [2024-10-07 13:36:31.620556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.169 [2024-10-07 13:36:31.621141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.169 [2024-10-07 13:36:31.621173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.169 [2024-10-07 13:36:31.621426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.169 [2024-10-07 13:36:31.621452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.169 [2024-10-07 13:36:31.621467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.169 [2024-10-07 13:36:31.621485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.169 [2024-10-07 13:36:31.621500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.169 [2024-10-07 13:36:31.621514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.169 [2024-10-07 13:36:31.621726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.169 [2024-10-07 13:36:31.621750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.169 8441.10 IOPS, 32.97 MiB/s [2024-10-07T11:36:37.881Z] [2024-10-07 13:36:31.629089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.169 [2024-10-07 13:36:31.629153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.169 [2024-10-07 13:36:31.629251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.169 [2024-10-07 13:36:31.629279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.169 [2024-10-07 13:36:31.629296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.169 [2024-10-07 13:36:31.632016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.169 [2024-10-07 13:36:31.632049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.169 [2024-10-07 13:36:31.632067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.169 [2024-10-07 13:36:31.632092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.169 [2024-10-07 13:36:31.634981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.169 [2024-10-07 13:36:31.635025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.169 [2024-10-07 13:36:31.635039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.169 [2024-10-07 13:36:31.635052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.169 [2024-10-07 13:36:31.636156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.169 [2024-10-07 13:36:31.636198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.169 [2024-10-07 13:36:31.636212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.169 [2024-10-07 13:36:31.636226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.169 [2024-10-07 13:36:31.636835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.170 [2024-10-07 13:36:31.639410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.170 [2024-10-07 13:36:31.639457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.170 [2024-10-07 13:36:31.639590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.170 [2024-10-07 13:36:31.639618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.170 [2024-10-07 13:36:31.639634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.170 [2024-10-07 13:36:31.639787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.170 [2024-10-07 13:36:31.639813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.170 [2024-10-07 13:36:31.639829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.170 [2024-10-07 13:36:31.639848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.170 [2024-10-07 13:36:31.639874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.170 [2024-10-07 13:36:31.639892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.170 [2024-10-07 13:36:31.639906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.170 [2024-10-07 13:36:31.639920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.170 [2024-10-07 13:36:31.639945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.170 [2024-10-07 13:36:31.639962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.170 [2024-10-07 13:36:31.639975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.170 [2024-10-07 13:36:31.639988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.170 [2024-10-07 13:36:31.640012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.170 [2024-10-07 13:36:31.651786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.170 [2024-10-07 13:36:31.651820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.170 [2024-10-07 13:36:31.651968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.170 [2024-10-07 13:36:31.651998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.170 [2024-10-07 13:36:31.652016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.170 [2024-10-07 13:36:31.652126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.170 [2024-10-07 13:36:31.652152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.170 [2024-10-07 13:36:31.652168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.170 [2024-10-07 13:36:31.652194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.170 [2024-10-07 13:36:31.652216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.170 [2024-10-07 13:36:31.652237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.170 [2024-10-07 13:36:31.652253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.170 [2024-10-07 13:36:31.652266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.170 [2024-10-07 13:36:31.652283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.170 [2024-10-07 13:36:31.652297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.170 [2024-10-07 13:36:31.652310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.170 [2024-10-07 13:36:31.652351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.170 [2024-10-07 13:36:31.652367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.170 [2024-10-07 13:36:31.662978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.170 [2024-10-07 13:36:31.663012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.170 [2024-10-07 13:36:31.664941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.170 [2024-10-07 13:36:31.664974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.170 [2024-10-07 13:36:31.664993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.170 [2024-10-07 13:36:31.665132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.170 [2024-10-07 13:36:31.665158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.170 [2024-10-07 13:36:31.665174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.170 [2024-10-07 13:36:31.667379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.170 [2024-10-07 13:36:31.667412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.170 [2024-10-07 13:36:31.668360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.170 [2024-10-07 13:36:31.668386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.170 [2024-10-07 13:36:31.668399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.170 [2024-10-07 13:36:31.668417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.170 [2024-10-07 13:36:31.668437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.170 [2024-10-07 13:36:31.668451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.171 [2024-10-07 13:36:31.668730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.171 [2024-10-07 13:36:31.668754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.171 [2024-10-07 13:36:31.673256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.171 [2024-10-07 13:36:31.673302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.171 [2024-10-07 13:36:31.673484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.171 [2024-10-07 13:36:31.673512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.171 [2024-10-07 13:36:31.673530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.171 [2024-10-07 13:36:31.673636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.171 [2024-10-07 13:36:31.673662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.171 [2024-10-07 13:36:31.673688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.171 [2024-10-07 13:36:31.673714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.171 [2024-10-07 13:36:31.673736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.171 [2024-10-07 13:36:31.673756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.171 [2024-10-07 13:36:31.673771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.171 [2024-10-07 13:36:31.673785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.171 [2024-10-07 13:36:31.673802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.171 [2024-10-07 13:36:31.673816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.171 [2024-10-07 13:36:31.673828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.171 [2024-10-07 13:36:31.673854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.171 [2024-10-07 13:36:31.673870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.171 [2024-10-07 13:36:31.683429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.171 [2024-10-07 13:36:31.683462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.171 [2024-10-07 13:36:31.683605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.171 [2024-10-07 13:36:31.683634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.171 [2024-10-07 13:36:31.683650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.171 [2024-10-07 13:36:31.683737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.171 [2024-10-07 13:36:31.683764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.171 [2024-10-07 13:36:31.683780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.171 [2024-10-07 13:36:31.683965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.171 [2024-10-07 13:36:31.684015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.171 [2024-10-07 13:36:31.684080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.171 [2024-10-07 13:36:31.684101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.171 [2024-10-07 13:36:31.684115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.171 [2024-10-07 13:36:31.684133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.171 [2024-10-07 13:36:31.684147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.171 [2024-10-07 13:36:31.684161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.171 [2024-10-07 13:36:31.684186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.171 [2024-10-07 13:36:31.684203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.171 [2024-10-07 13:36:31.695844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.171 [2024-10-07 13:36:31.695878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.171 [2024-10-07 13:36:31.696195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.171 [2024-10-07 13:36:31.696228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.171 [2024-10-07 13:36:31.696246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.171 [2024-10-07 13:36:31.696392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.171 [2024-10-07 13:36:31.696418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.171 [2024-10-07 13:36:31.696435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.171 [2024-10-07 13:36:31.696909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.171 [2024-10-07 13:36:31.696942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.171 [2024-10-07 13:36:31.697176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.171 [2024-10-07 13:36:31.697202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.171 [2024-10-07 13:36:31.697217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.171 [2024-10-07 13:36:31.697235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.171 [2024-10-07 13:36:31.697250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.171 [2024-10-07 13:36:31.697263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.171 [2024-10-07 13:36:31.697329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.172 [2024-10-07 13:36:31.697349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.172 [2024-10-07 13:36:31.707305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.172 [2024-10-07 13:36:31.707338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.172 [2024-10-07 13:36:31.707582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.172 [2024-10-07 13:36:31.707616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.172 [2024-10-07 13:36:31.707635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.172 [2024-10-07 13:36:31.707725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.172 [2024-10-07 13:36:31.707751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.172 [2024-10-07 13:36:31.707768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.172 [2024-10-07 13:36:31.709824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.172 [2024-10-07 13:36:31.709856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.172 [2024-10-07 13:36:31.710569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.172 [2024-10-07 13:36:31.710595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.172 [2024-10-07 13:36:31.710609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.172 [2024-10-07 13:36:31.710627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.172 [2024-10-07 13:36:31.710641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.172 [2024-10-07 13:36:31.710654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.172 [2024-10-07 13:36:31.711157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.172 [2024-10-07 13:36:31.711182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.172 [2024-10-07 13:36:31.717420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.172 [2024-10-07 13:36:31.717465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.172 [2024-10-07 13:36:31.717644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.172 [2024-10-07 13:36:31.717679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.172 [2024-10-07 13:36:31.717698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.172 [2024-10-07 13:36:31.717814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.172 [2024-10-07 13:36:31.717841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.172 [2024-10-07 13:36:31.717856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.172 [2024-10-07 13:36:31.717875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.172 [2024-10-07 13:36:31.720092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.172 [2024-10-07 13:36:31.720122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.172 [2024-10-07 13:36:31.720136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.172 [2024-10-07 13:36:31.720149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.172 [2024-10-07 13:36:31.720342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.172 [2024-10-07 13:36:31.720366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.172 [2024-10-07 13:36:31.720385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.172 [2024-10-07 13:36:31.720399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.172 [2024-10-07 13:36:31.720511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.172 [2024-10-07 13:36:31.727840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.172 [2024-10-07 13:36:31.727873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.172 [2024-10-07 13:36:31.728037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.172 [2024-10-07 13:36:31.728065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.172 [2024-10-07 13:36:31.728082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.172 [2024-10-07 13:36:31.728164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.172 [2024-10-07 13:36:31.728190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.172 [2024-10-07 13:36:31.728206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.172 [2024-10-07 13:36:31.728403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.172 [2024-10-07 13:36:31.728446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.172 [2024-10-07 13:36:31.728525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.172 [2024-10-07 13:36:31.728547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.172 [2024-10-07 13:36:31.728562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.172 [2024-10-07 13:36:31.728579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.172 [2024-10-07 13:36:31.728594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.172 [2024-10-07 13:36:31.728607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.172 [2024-10-07 13:36:31.728632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.172 [2024-10-07 13:36:31.728649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.172 [2024-10-07 13:36:31.740779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.172 [2024-10-07 13:36:31.740813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.172 [2024-10-07 13:36:31.741114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.172 [2024-10-07 13:36:31.741146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.172 [2024-10-07 13:36:31.741164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.172 [2024-10-07 13:36:31.741382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.172 [2024-10-07 13:36:31.741408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.173 [2024-10-07 13:36:31.741424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.173 [2024-10-07 13:36:31.741628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.173 [2024-10-07 13:36:31.741674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.173 [2024-10-07 13:36:31.741727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.173 [2024-10-07 13:36:31.741748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.173 [2024-10-07 13:36:31.741762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.173 [2024-10-07 13:36:31.741780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.173 [2024-10-07 13:36:31.741795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.173 [2024-10-07 13:36:31.741807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.173 [2024-10-07 13:36:31.742004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.173 [2024-10-07 13:36:31.742027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.173 [2024-10-07 13:36:31.754192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.173 [2024-10-07 13:36:31.754225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.173 [2024-10-07 13:36:31.754614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.173 [2024-10-07 13:36:31.754646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.173 [2024-10-07 13:36:31.754671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.173 [2024-10-07 13:36:31.754792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.173 [2024-10-07 13:36:31.754818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.173 [2024-10-07 13:36:31.754834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.173 [2024-10-07 13:36:31.755556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.173 [2024-10-07 13:36:31.755586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.173 [2024-10-07 13:36:31.755856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.173 [2024-10-07 13:36:31.755879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.173 [2024-10-07 13:36:31.755893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.173 [2024-10-07 13:36:31.755912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.173 [2024-10-07 13:36:31.755927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.173 [2024-10-07 13:36:31.755941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.173 [2024-10-07 13:36:31.756144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.173 [2024-10-07 13:36:31.756167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.173 [2024-10-07 13:36:31.764339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.173 [2024-10-07 13:36:31.764387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.173 [2024-10-07 13:36:31.764516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.173 [2024-10-07 13:36:31.764561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.173 [2024-10-07 13:36:31.764584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.173 [2024-10-07 13:36:31.764713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.173 [2024-10-07 13:36:31.764742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.173 [2024-10-07 13:36:31.764759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.173 [2024-10-07 13:36:31.764779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.173 [2024-10-07 13:36:31.767530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.174 [2024-10-07 13:36:31.767560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.174 [2024-10-07 13:36:31.767575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.174 [2024-10-07 13:36:31.767588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.174 [2024-10-07 13:36:31.768684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.174 [2024-10-07 13:36:31.768711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.174 [2024-10-07 13:36:31.768726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.174 [2024-10-07 13:36:31.768740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.174 [2024-10-07 13:36:31.769104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.174 [2024-10-07 13:36:31.774424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.174 [2024-10-07 13:36:31.774599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.174 [2024-10-07 13:36:31.774628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.174 [2024-10-07 13:36:31.774644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.174 [2024-10-07 13:36:31.774690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.174 [2024-10-07 13:36:31.774740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.174 [2024-10-07 13:36:31.774773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.174 [2024-10-07 13:36:31.774790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.174 [2024-10-07 13:36:31.774803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.174 [2024-10-07 13:36:31.774827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.174 [2024-10-07 13:36:31.774991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.174 [2024-10-07 13:36:31.775018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.174 [2024-10-07 13:36:31.775034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.174 [2024-10-07 13:36:31.775059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.174 [2024-10-07 13:36:31.775083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.174 [2024-10-07 13:36:31.775099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.174 [2024-10-07 13:36:31.775119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.174 [2024-10-07 13:36:31.775252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.174 [2024-10-07 13:36:31.784632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.174 [2024-10-07 13:36:31.784785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.174 [2024-10-07 13:36:31.784814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.174 [2024-10-07 13:36:31.784831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.174 [2024-10-07 13:36:31.784856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.174 [2024-10-07 13:36:31.784908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.174 [2024-10-07 13:36:31.784930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.174 [2024-10-07 13:36:31.784944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.174 [2024-10-07 13:36:31.784972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.174 [2024-10-07 13:36:31.784992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.174 [2024-10-07 13:36:31.785202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.174 [2024-10-07 13:36:31.785229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.174 [2024-10-07 13:36:31.785246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.174 [2024-10-07 13:36:31.785271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.174 [2024-10-07 13:36:31.785295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.174 [2024-10-07 13:36:31.785310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.174 [2024-10-07 13:36:31.785324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.174 [2024-10-07 13:36:31.785348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.174 [2024-10-07 13:36:31.795784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.174 [2024-10-07 13:36:31.795818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.174 [2024-10-07 13:36:31.796128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.174 [2024-10-07 13:36:31.796159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.174 [2024-10-07 13:36:31.796177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.174 [2024-10-07 13:36:31.796313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.174 [2024-10-07 13:36:31.796340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.174 [2024-10-07 13:36:31.796357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.174 [2024-10-07 13:36:31.796466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.174 [2024-10-07 13:36:31.796492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.174 [2024-10-07 13:36:31.797528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.175 [2024-10-07 13:36:31.797554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.175 [2024-10-07 13:36:31.797568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.175 [2024-10-07 13:36:31.797585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.175 [2024-10-07 13:36:31.797599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.175 [2024-10-07 13:36:31.797611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.175 [2024-10-07 13:36:31.799502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.175 [2024-10-07 13:36:31.799529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.175 [2024-10-07 13:36:31.807688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.175 [2024-10-07 13:36:31.807721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.175 [2024-10-07 13:36:31.807961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.175 [2024-10-07 13:36:31.807990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.175 [2024-10-07 13:36:31.808007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.175 [2024-10-07 13:36:31.808087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.175 [2024-10-07 13:36:31.808113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.175 [2024-10-07 13:36:31.808129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.175 [2024-10-07 13:36:31.808266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.175 [2024-10-07 13:36:31.808295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.175 [2024-10-07 13:36:31.808410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.175 [2024-10-07 13:36:31.808432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.175 [2024-10-07 13:36:31.808447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.175 [2024-10-07 13:36:31.808464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.175 [2024-10-07 13:36:31.808479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.175 [2024-10-07 13:36:31.808492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.175 [2024-10-07 13:36:31.808599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.175 [2024-10-07 13:36:31.808634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.175 [2024-10-07 13:36:31.817802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.175 [2024-10-07 13:36:31.817851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.175 [2024-10-07 13:36:31.817983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.175 [2024-10-07 13:36:31.818011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.175 [2024-10-07 13:36:31.818029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.175 [2024-10-07 13:36:31.818122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.175 [2024-10-07 13:36:31.818149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.175 [2024-10-07 13:36:31.818165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.175 [2024-10-07 13:36:31.818184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.175 [2024-10-07 13:36:31.818210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.175 [2024-10-07 13:36:31.818229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.175 [2024-10-07 13:36:31.818242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.175 [2024-10-07 13:36:31.818255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.175 [2024-10-07 13:36:31.818280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.175 [2024-10-07 13:36:31.818298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.175 [2024-10-07 13:36:31.818311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.175 [2024-10-07 13:36:31.818324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.175 [2024-10-07 13:36:31.818346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.175 [2024-10-07 13:36:31.829167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.175 [2024-10-07 13:36:31.829202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.175 [2024-10-07 13:36:31.829391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.175 [2024-10-07 13:36:31.829420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.175 [2024-10-07 13:36:31.829437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.176 [2024-10-07 13:36:31.829545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.176 [2024-10-07 13:36:31.829571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.176 [2024-10-07 13:36:31.829587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.176 [2024-10-07 13:36:31.829783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.176 [2024-10-07 13:36:31.829827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.176 [2024-10-07 13:36:31.829891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.176 [2024-10-07 13:36:31.829912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.176 [2024-10-07 13:36:31.829926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.176 [2024-10-07 13:36:31.829943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.176 [2024-10-07 13:36:31.829959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.176 [2024-10-07 13:36:31.829972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.176 [2024-10-07 13:36:31.830153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.176 [2024-10-07 13:36:31.830182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.176 [2024-10-07 13:36:31.841704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.176 [2024-10-07 13:36:31.841738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.176 [2024-10-07 13:36:31.841960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.176 [2024-10-07 13:36:31.841989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.176 [2024-10-07 13:36:31.842006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.176 [2024-10-07 13:36:31.842088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.176 [2024-10-07 13:36:31.842113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.176 [2024-10-07 13:36:31.842129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.176 [2024-10-07 13:36:31.843595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.176 [2024-10-07 13:36:31.843628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.176 [2024-10-07 13:36:31.844256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.176 [2024-10-07 13:36:31.844281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.176 [2024-10-07 13:36:31.844295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.176 [2024-10-07 13:36:31.844311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.176 [2024-10-07 13:36:31.844324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.176 [2024-10-07 13:36:31.844336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.176 [2024-10-07 13:36:31.844603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.176 [2024-10-07 13:36:31.844627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.176 [2024-10-07 13:36:31.851821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.176 [2024-10-07 13:36:31.851868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.176 [2024-10-07 13:36:31.852012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.176 [2024-10-07 13:36:31.852042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.176 [2024-10-07 13:36:31.852059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.176 [2024-10-07 13:36:31.852173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.176 [2024-10-07 13:36:31.852200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.176 [2024-10-07 13:36:31.852217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.176 [2024-10-07 13:36:31.852235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.176 [2024-10-07 13:36:31.852261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.176 [2024-10-07 13:36:31.852279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.176 [2024-10-07 13:36:31.852298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.176 [2024-10-07 13:36:31.852311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.176 [2024-10-07 13:36:31.852337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.176 [2024-10-07 13:36:31.852354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.176 [2024-10-07 13:36:31.852367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.176 [2024-10-07 13:36:31.852380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.176 [2024-10-07 13:36:31.852402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.176 [2024-10-07 13:36:31.861909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.176 [2024-10-07 13:36:31.862022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.177 [2024-10-07 13:36:31.862050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.177 [2024-10-07 13:36:31.862069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.177 [2024-10-07 13:36:31.862267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.177 [2024-10-07 13:36:31.862362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.177 [2024-10-07 13:36:31.862397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.177 [2024-10-07 13:36:31.862414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.177 [2024-10-07 13:36:31.862429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.177 [2024-10-07 13:36:31.862453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.177 [2024-10-07 13:36:31.862568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.177 [2024-10-07 13:36:31.862595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.177 [2024-10-07 13:36:31.862612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.177 [2024-10-07 13:36:31.862637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.177 [2024-10-07 13:36:31.862661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.177 [2024-10-07 13:36:31.862688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.177 [2024-10-07 13:36:31.862702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.177 [2024-10-07 13:36:31.862726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.177 [2024-10-07 13:36:31.875965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.177 [2024-10-07 13:36:31.876000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.177 [2024-10-07 13:36:31.876342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.177 [2024-10-07 13:36:31.876373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.177 [2024-10-07 13:36:31.876390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.177 [2024-10-07 13:36:31.876503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.177 [2024-10-07 13:36:31.876534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.177 [2024-10-07 13:36:31.876551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.177 [2024-10-07 13:36:31.876768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.177 [2024-10-07 13:36:31.876799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.177 [2024-10-07 13:36:31.876848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.177 [2024-10-07 13:36:31.876867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.177 [2024-10-07 13:36:31.876882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.177 [2024-10-07 13:36:31.876899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.177 [2024-10-07 13:36:31.876913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.177 [2024-10-07 13:36:31.876926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.177 [2024-10-07 13:36:31.877108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.177 [2024-10-07 13:36:31.877134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.177 [2024-10-07 13:36:31.890841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.177 [2024-10-07 13:36:31.890874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.177 [2024-10-07 13:36:31.891009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.177 [2024-10-07 13:36:31.891039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.177 [2024-10-07 13:36:31.891056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.177 [2024-10-07 13:36:31.891139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.177 [2024-10-07 13:36:31.891166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.177 [2024-10-07 13:36:31.891183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.177 [2024-10-07 13:36:31.891208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.177 [2024-10-07 13:36:31.891228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.177 [2024-10-07 13:36:31.891250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.177 [2024-10-07 13:36:31.891266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.177 [2024-10-07 13:36:31.891280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.177 [2024-10-07 13:36:31.891298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.177 [2024-10-07 13:36:31.891312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.177 [2024-10-07 13:36:31.891325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.177 [2024-10-07 13:36:31.891350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.177 [2024-10-07 13:36:31.891366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.177 [2024-10-07 13:36:31.901479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.177 [2024-10-07 13:36:31.901512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.177 [2024-10-07 13:36:31.904362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.177 [2024-10-07 13:36:31.904394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.178 [2024-10-07 13:36:31.904412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.178 [2024-10-07 13:36:31.904495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.178 [2024-10-07 13:36:31.904520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.178 [2024-10-07 13:36:31.904540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.178 [2024-10-07 13:36:31.906104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.178 [2024-10-07 13:36:31.906137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.178 [2024-10-07 13:36:31.906194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.178 [2024-10-07 13:36:31.906214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.178 [2024-10-07 13:36:31.906228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.178 [2024-10-07 13:36:31.906245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.178 [2024-10-07 13:36:31.906260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.178 [2024-10-07 13:36:31.906273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.178 [2024-10-07 13:36:31.906298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.178 [2024-10-07 13:36:31.906315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.178 [2024-10-07 13:36:31.913573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.178 [2024-10-07 13:36:31.913606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.178 [2024-10-07 13:36:31.913812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.178 [2024-10-07 13:36:31.913844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.178 [2024-10-07 13:36:31.913861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.178 [2024-10-07 13:36:31.913999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.178 [2024-10-07 13:36:31.914025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.178 [2024-10-07 13:36:31.914042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.178 [2024-10-07 13:36:31.914149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.178 [2024-10-07 13:36:31.914176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.178 [2024-10-07 13:36:31.914294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.178 [2024-10-07 13:36:31.914315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.178 [2024-10-07 13:36:31.914334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.178 [2024-10-07 13:36:31.914352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.178 [2024-10-07 13:36:31.914366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.178 [2024-10-07 13:36:31.914378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.178 [2024-10-07 13:36:31.914481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.178 [2024-10-07 13:36:31.914502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.178 [2024-10-07 13:36:31.924153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.178 [2024-10-07 13:36:31.924187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.178 [2024-10-07 13:36:31.924291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.178 [2024-10-07 13:36:31.924319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.178 [2024-10-07 13:36:31.924336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.178 [2024-10-07 13:36:31.924421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.178 [2024-10-07 13:36:31.924447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.178 [2024-10-07 13:36:31.924463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.178 [2024-10-07 13:36:31.924488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.178 [2024-10-07 13:36:31.924510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.178 [2024-10-07 13:36:31.924531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.178 [2024-10-07 13:36:31.924546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.178 [2024-10-07 13:36:31.924560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.178 [2024-10-07 13:36:31.924577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.178 [2024-10-07 13:36:31.924592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.178 [2024-10-07 13:36:31.924604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.178 [2024-10-07 13:36:31.924629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.178 [2024-10-07 13:36:31.924646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.178 [2024-10-07 13:36:31.935273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.178 [2024-10-07 13:36:31.935308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.178 [2024-10-07 13:36:31.935472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.178 [2024-10-07 13:36:31.935503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.178 [2024-10-07 13:36:31.935521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.178 [2024-10-07 13:36:31.935626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.178 [2024-10-07 13:36:31.935654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.178 [2024-10-07 13:36:31.935687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.178 [2024-10-07 13:36:31.935873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.179 [2024-10-07 13:36:31.935917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.179 [2024-10-07 13:36:31.935981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.179 [2024-10-07 13:36:31.936003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.179 [2024-10-07 13:36:31.936017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.179 [2024-10-07 13:36:31.936034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.179 [2024-10-07 13:36:31.936049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.179 [2024-10-07 13:36:31.936062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.179 [2024-10-07 13:36:31.936244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.179 [2024-10-07 13:36:31.936269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.179 [2024-10-07 13:36:31.948500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.179 [2024-10-07 13:36:31.948534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.179 [2024-10-07 13:36:31.948720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.179 [2024-10-07 13:36:31.948752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.179 [2024-10-07 13:36:31.948769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.179 [2024-10-07 13:36:31.948880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.179 [2024-10-07 13:36:31.948908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.179 [2024-10-07 13:36:31.948925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.179 [2024-10-07 13:36:31.949769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.179 [2024-10-07 13:36:31.949799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.179 [2024-10-07 13:36:31.950217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.179 [2024-10-07 13:36:31.950242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.179 [2024-10-07 13:36:31.950256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.179 [2024-10-07 13:36:31.950274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.179 [2024-10-07 13:36:31.950288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.179 [2024-10-07 13:36:31.950301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.179 [2024-10-07 13:36:31.950519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.179 [2024-10-07 13:36:31.950544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.179 [2024-10-07 13:36:31.959045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.179 [2024-10-07 13:36:31.959084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.179 [2024-10-07 13:36:31.959301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.179 [2024-10-07 13:36:31.959332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.179 [2024-10-07 13:36:31.959349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.179 [2024-10-07 13:36:31.959436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.179 [2024-10-07 13:36:31.959461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.179 [2024-10-07 13:36:31.959478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.179 [2024-10-07 13:36:31.959586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.179 [2024-10-07 13:36:31.959614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.179 [2024-10-07 13:36:31.959728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.179 [2024-10-07 13:36:31.959751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.179 [2024-10-07 13:36:31.959765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.179 [2024-10-07 13:36:31.959782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.179 [2024-10-07 13:36:31.959796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.179 [2024-10-07 13:36:31.959809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.179 [2024-10-07 13:36:31.963058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.179 [2024-10-07 13:36:31.963085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.179 [2024-10-07 13:36:31.969500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.179 [2024-10-07 13:36:31.969533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.179 [2024-10-07 13:36:31.969646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.179 [2024-10-07 13:36:31.969684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.179 [2024-10-07 13:36:31.969703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.179 [2024-10-07 13:36:31.969820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.179 [2024-10-07 13:36:31.969846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.179 [2024-10-07 13:36:31.969861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.179 [2024-10-07 13:36:31.969886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.179 [2024-10-07 13:36:31.969907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.179 [2024-10-07 13:36:31.969928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.179 [2024-10-07 13:36:31.969943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.179 [2024-10-07 13:36:31.969956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.179 [2024-10-07 13:36:31.969979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.179 [2024-10-07 13:36:31.970000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.179 [2024-10-07 13:36:31.970013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.179 [2024-10-07 13:36:31.970038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.179 [2024-10-07 13:36:31.970054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.179 [2024-10-07 13:36:31.979821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.179 [2024-10-07 13:36:31.979853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.179 [2024-10-07 13:36:31.980145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.180 [2024-10-07 13:36:31.980176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.180 [2024-10-07 13:36:31.980194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.180 [2024-10-07 13:36:31.980303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.180 [2024-10-07 13:36:31.980329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.180 [2024-10-07 13:36:31.980346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.180 [2024-10-07 13:36:31.980551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.180 [2024-10-07 13:36:31.980579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.180 [2024-10-07 13:36:31.980627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.180 [2024-10-07 13:36:31.980647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.180 [2024-10-07 13:36:31.980661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.180 [2024-10-07 13:36:31.980688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.180 [2024-10-07 13:36:31.980703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.180 [2024-10-07 13:36:31.980717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.180 [2024-10-07 13:36:31.980900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.180 [2024-10-07 13:36:31.980924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.180 [2024-10-07 13:36:31.994279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.180 [2024-10-07 13:36:31.994313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.180 [2024-10-07 13:36:31.994694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.180 [2024-10-07 13:36:31.994734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.180 [2024-10-07 13:36:31.994751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.180 [2024-10-07 13:36:31.994856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.180 [2024-10-07 13:36:31.994883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.180 [2024-10-07 13:36:31.994904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.180 [2024-10-07 13:36:31.995329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.180 [2024-10-07 13:36:31.995374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.180 [2024-10-07 13:36:31.995605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.180 [2024-10-07 13:36:31.995630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.180 [2024-10-07 13:36:31.995645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.180 [2024-10-07 13:36:31.995664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.180 [2024-10-07 13:36:31.995691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.180 [2024-10-07 13:36:31.995704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.180 [2024-10-07 13:36:31.995763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.180 [2024-10-07 13:36:31.995783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.180 [2024-10-07 13:36:32.004626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.180 [2024-10-07 13:36:32.004684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.180 [2024-10-07 13:36:32.004946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.180 [2024-10-07 13:36:32.004978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.180 [2024-10-07 13:36:32.004996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.180 [2024-10-07 13:36:32.005075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.180 [2024-10-07 13:36:32.005102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.180 [2024-10-07 13:36:32.005118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.180 [2024-10-07 13:36:32.006943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.180 [2024-10-07 13:36:32.006975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.180 [2024-10-07 13:36:32.008988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.180 [2024-10-07 13:36:32.009014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.180 [2024-10-07 13:36:32.009029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.180 [2024-10-07 13:36:32.009047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.180 [2024-10-07 13:36:32.009061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.180 [2024-10-07 13:36:32.009075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.180 [2024-10-07 13:36:32.009381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.180 [2024-10-07 13:36:32.009407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.180 [2024-10-07 13:36:32.014901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.180 [2024-10-07 13:36:32.014932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.180 [2024-10-07 13:36:32.015087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.180 [2024-10-07 13:36:32.015116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.180 [2024-10-07 13:36:32.015133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.180 [2024-10-07 13:36:32.015233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.180 [2024-10-07 13:36:32.015258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.180 [2024-10-07 13:36:32.015274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.180 [2024-10-07 13:36:32.015299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.180 [2024-10-07 13:36:32.015321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.180 [2024-10-07 13:36:32.015343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.180 [2024-10-07 13:36:32.015357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.180 [2024-10-07 13:36:32.015371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.180 [2024-10-07 13:36:32.015388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.181 [2024-10-07 13:36:32.015403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.181 [2024-10-07 13:36:32.015416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.181 [2024-10-07 13:36:32.015441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.181 [2024-10-07 13:36:32.015472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.181 [2024-10-07 13:36:32.025028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.181 [2024-10-07 13:36:32.025061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.181 [2024-10-07 13:36:32.025169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.181 [2024-10-07 13:36:32.025197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.181 [2024-10-07 13:36:32.025214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.181 [2024-10-07 13:36:32.025324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.181 [2024-10-07 13:36:32.025350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.181 [2024-10-07 13:36:32.025366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.181 [2024-10-07 13:36:32.025391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.181 [2024-10-07 13:36:32.025412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.181 [2024-10-07 13:36:32.025433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.181 [2024-10-07 13:36:32.025448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.181 [2024-10-07 13:36:32.025461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.181 [2024-10-07 13:36:32.025478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.181 [2024-10-07 13:36:32.025498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.181 [2024-10-07 13:36:32.025512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.181 [2024-10-07 13:36:32.025536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.181 [2024-10-07 13:36:32.025553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.181 [2024-10-07 13:36:32.039001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.181 [2024-10-07 13:36:32.039036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.181 [2024-10-07 13:36:32.039599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.181 [2024-10-07 13:36:32.039632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.181 [2024-10-07 13:36:32.039649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.181 [2024-10-07 13:36:32.039747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.181 [2024-10-07 13:36:32.039773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.181 [2024-10-07 13:36:32.039790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.181 [2024-10-07 13:36:32.040009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.181 [2024-10-07 13:36:32.040038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.181 [2024-10-07 13:36:32.040238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.181 [2024-10-07 13:36:32.040261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.181 [2024-10-07 13:36:32.040275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.181 [2024-10-07 13:36:32.040294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.181 [2024-10-07 13:36:32.040308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.181 [2024-10-07 13:36:32.040322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.181 [2024-10-07 13:36:32.040553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.181 [2024-10-07 13:36:32.040577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.181 [2024-10-07 13:36:32.049311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.181 [2024-10-07 13:36:32.049344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.181 [2024-10-07 13:36:32.049683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.181 [2024-10-07 13:36:32.049716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.181 [2024-10-07 13:36:32.049734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.181 [2024-10-07 13:36:32.049841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.181 [2024-10-07 13:36:32.049867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.181 [2024-10-07 13:36:32.049883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.181 [2024-10-07 13:36:32.054649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.181 [2024-10-07 13:36:32.054691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.181 [2024-10-07 13:36:32.055487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.181 [2024-10-07 13:36:32.055512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.181 [2024-10-07 13:36:32.055525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.181 [2024-10-07 13:36:32.055557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.182 [2024-10-07 13:36:32.055572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.182 [2024-10-07 13:36:32.055585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.182 [2024-10-07 13:36:32.056079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.182 [2024-10-07 13:36:32.056103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.182 [2024-10-07 13:36:32.059747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.182 [2024-10-07 13:36:32.059778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.182 [2024-10-07 13:36:32.060092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.182 [2024-10-07 13:36:32.060124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.182 [2024-10-07 13:36:32.060142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.182 [2024-10-07 13:36:32.060252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.182 [2024-10-07 13:36:32.060278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.182 [2024-10-07 13:36:32.060294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.182 [2024-10-07 13:36:32.060343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.182 [2024-10-07 13:36:32.060368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.182 [2024-10-07 13:36:32.060390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.182 [2024-10-07 13:36:32.060406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.182 [2024-10-07 13:36:32.060419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.182 [2024-10-07 13:36:32.060436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.182 [2024-10-07 13:36:32.060451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.182 [2024-10-07 13:36:32.060464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.182 [2024-10-07 13:36:32.060488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.182 [2024-10-07 13:36:32.060519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.182 [2024-10-07 13:36:32.070443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.182 [2024-10-07 13:36:32.070477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.182 [2024-10-07 13:36:32.070645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.182 [2024-10-07 13:36:32.070687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.182 [2024-10-07 13:36:32.070707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.182 [2024-10-07 13:36:32.070791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.182 [2024-10-07 13:36:32.070816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.182 [2024-10-07 13:36:32.070832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.182 [2024-10-07 13:36:32.071032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.182 [2024-10-07 13:36:32.071061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.182 [2024-10-07 13:36:32.071123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.182 [2024-10-07 13:36:32.071142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.182 [2024-10-07 13:36:32.071170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.182 [2024-10-07 13:36:32.071189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.182 [2024-10-07 13:36:32.071204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.182 [2024-10-07 13:36:32.071217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.182 [2024-10-07 13:36:32.071400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.182 [2024-10-07 13:36:32.071423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.182 [2024-10-07 13:36:32.084381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.182 [2024-10-07 13:36:32.084414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.182 [2024-10-07 13:36:32.084973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.182 [2024-10-07 13:36:32.085005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.182 [2024-10-07 13:36:32.085023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.182 [2024-10-07 13:36:32.085103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.182 [2024-10-07 13:36:32.085129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.182 [2024-10-07 13:36:32.085145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.182 [2024-10-07 13:36:32.085361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.182 [2024-10-07 13:36:32.085390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.182 [2024-10-07 13:36:32.085438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.182 [2024-10-07 13:36:32.085459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.182 [2024-10-07 13:36:32.085473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.182 [2024-10-07 13:36:32.085490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.182 [2024-10-07 13:36:32.085505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.182 [2024-10-07 13:36:32.085528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.182 [2024-10-07 13:36:32.085776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.182 [2024-10-07 13:36:32.085800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.182 [2024-10-07 13:36:32.095537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.182 [2024-10-07 13:36:32.095570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.182 [2024-10-07 13:36:32.095800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.182 [2024-10-07 13:36:32.095830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.182 [2024-10-07 13:36:32.095847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.182 [2024-10-07 13:36:32.095954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.182 [2024-10-07 13:36:32.095981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.182 [2024-10-07 13:36:32.095997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.182 [2024-10-07 13:36:32.096104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.182 [2024-10-07 13:36:32.096132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.182 [2024-10-07 13:36:32.098552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.182 [2024-10-07 13:36:32.098579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.182 [2024-10-07 13:36:32.098594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.182 [2024-10-07 13:36:32.098611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.182 [2024-10-07 13:36:32.098626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.182 [2024-10-07 13:36:32.098640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.182 [2024-10-07 13:36:32.099131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.182 [2024-10-07 13:36:32.099170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.182 [2024-10-07 13:36:32.105674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.183 [2024-10-07 13:36:32.105721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.183 [2024-10-07 13:36:32.105850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.183 [2024-10-07 13:36:32.105879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.183 [2024-10-07 13:36:32.105895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.183 [2024-10-07 13:36:32.106042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.183 [2024-10-07 13:36:32.106078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.183 [2024-10-07 13:36:32.106094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.183 [2024-10-07 13:36:32.106112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.183 [2024-10-07 13:36:32.106156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.183 [2024-10-07 13:36:32.106175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.183 [2024-10-07 13:36:32.106188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.183 [2024-10-07 13:36:32.106201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.183 [2024-10-07 13:36:32.106226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.183 [2024-10-07 13:36:32.106243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.183 [2024-10-07 13:36:32.106256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.183 [2024-10-07 13:36:32.106269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.183 [2024-10-07 13:36:32.106292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.183 [2024-10-07 13:36:32.115760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.183 [2024-10-07 13:36:32.115936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.183 [2024-10-07 13:36:32.115965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.183 [2024-10-07 13:36:32.115982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.183 [2024-10-07 13:36:32.116021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.183 [2024-10-07 13:36:32.116054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.183 [2024-10-07 13:36:32.116083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.183 [2024-10-07 13:36:32.116100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.183 [2024-10-07 13:36:32.116114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.183 [2024-10-07 13:36:32.116137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.183 [2024-10-07 13:36:32.116235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.183 [2024-10-07 13:36:32.116261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.183 [2024-10-07 13:36:32.116276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.183 [2024-10-07 13:36:32.116301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.183 [2024-10-07 13:36:32.116325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.183 [2024-10-07 13:36:32.116339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.183 [2024-10-07 13:36:32.116352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.183 [2024-10-07 13:36:32.116375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.183 [2024-10-07 13:36:32.126651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.183 [2024-10-07 13:36:32.126691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.183 [2024-10-07 13:36:32.126843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.183 [2024-10-07 13:36:32.126871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.183 [2024-10-07 13:36:32.126893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.183 [2024-10-07 13:36:32.126982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.183 [2024-10-07 13:36:32.127007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.183 [2024-10-07 13:36:32.127023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.183 [2024-10-07 13:36:32.127171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.183 [2024-10-07 13:36:32.127200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.183 [2024-10-07 13:36:32.127345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.183 [2024-10-07 13:36:32.127367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.183 [2024-10-07 13:36:32.127381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.183 [2024-10-07 13:36:32.127399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.183 [2024-10-07 13:36:32.127414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.183 [2024-10-07 13:36:32.127427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.183 [2024-10-07 13:36:32.127547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.183 [2024-10-07 13:36:32.127569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.183 [2024-10-07 13:36:32.137270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.183 [2024-10-07 13:36:32.137301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.183 [2024-10-07 13:36:32.137470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.183 [2024-10-07 13:36:32.137499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.183 [2024-10-07 13:36:32.137517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.183 [2024-10-07 13:36:32.137657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.183 [2024-10-07 13:36:32.137691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.183 [2024-10-07 13:36:32.137709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.183 [2024-10-07 13:36:32.139797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.183 [2024-10-07 13:36:32.139829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.183 [2024-10-07 13:36:32.141336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.183 [2024-10-07 13:36:32.141363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.183 [2024-10-07 13:36:32.141378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.183 [2024-10-07 13:36:32.141396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.183 [2024-10-07 13:36:32.141410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.183 [2024-10-07 13:36:32.141429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.183 [2024-10-07 13:36:32.141651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.183 [2024-10-07 13:36:32.141685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.183 [2024-10-07 13:36:32.147395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.183 [2024-10-07 13:36:32.147441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.183 [2024-10-07 13:36:32.149523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.183 [2024-10-07 13:36:32.149555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.183 [2024-10-07 13:36:32.149573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.183 [2024-10-07 13:36:32.149690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.183 [2024-10-07 13:36:32.149717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.184 [2024-10-07 13:36:32.149733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.184 [2024-10-07 13:36:32.154106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.184 [2024-10-07 13:36:32.154141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.184 [2024-10-07 13:36:32.154495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.184 [2024-10-07 13:36:32.154537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.184 [2024-10-07 13:36:32.154553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.184 [2024-10-07 13:36:32.154572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.184 [2024-10-07 13:36:32.154588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.184 [2024-10-07 13:36:32.154601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.184 [2024-10-07 13:36:32.154751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.184 [2024-10-07 13:36:32.154774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.184 [2024-10-07 13:36:32.157520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.184 [2024-10-07 13:36:32.157564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.184 [2024-10-07 13:36:32.157786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.184 [2024-10-07 13:36:32.157815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.184 [2024-10-07 13:36:32.157833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.184 [2024-10-07 13:36:32.157955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.184 [2024-10-07 13:36:32.157981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.184 [2024-10-07 13:36:32.157998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.184 [2024-10-07 13:36:32.158017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.184 [2024-10-07 13:36:32.158043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.184 [2024-10-07 13:36:32.158067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.184 [2024-10-07 13:36:32.158081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.184 [2024-10-07 13:36:32.158094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.184 [2024-10-07 13:36:32.158119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.184 [2024-10-07 13:36:32.158137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.184 [2024-10-07 13:36:32.158165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.184 [2024-10-07 13:36:32.158179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.184 [2024-10-07 13:36:32.158233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.184 [2024-10-07 13:36:32.169108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.184 [2024-10-07 13:36:32.169141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.184 [2024-10-07 13:36:32.170284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.184 [2024-10-07 13:36:32.170316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.184 [2024-10-07 13:36:32.170333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.184 [2024-10-07 13:36:32.170452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.184 [2024-10-07 13:36:32.170477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.184 [2024-10-07 13:36:32.170493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.184 [2024-10-07 13:36:32.171081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.184 [2024-10-07 13:36:32.171127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.184 [2024-10-07 13:36:32.171367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.184 [2024-10-07 13:36:32.171393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.184 [2024-10-07 13:36:32.171408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.184 [2024-10-07 13:36:32.171426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.184 [2024-10-07 13:36:32.171441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.184 [2024-10-07 13:36:32.171454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.184 [2024-10-07 13:36:32.171658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.184 [2024-10-07 13:36:32.171691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.184 [2024-10-07 13:36:32.179594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.184 [2024-10-07 13:36:32.179627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.184 [2024-10-07 13:36:32.179846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.184 [2024-10-07 13:36:32.179876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.184 [2024-10-07 13:36:32.179899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.184 [2024-10-07 13:36:32.179991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.184 [2024-10-07 13:36:32.180019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.184 [2024-10-07 13:36:32.180035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.184 [2024-10-07 13:36:32.180144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.184 [2024-10-07 13:36:32.180172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.184 [2024-10-07 13:36:32.180305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.184 [2024-10-07 13:36:32.180327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.184 [2024-10-07 13:36:32.180340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.184 [2024-10-07 13:36:32.180357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.184 [2024-10-07 13:36:32.180372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.184 [2024-10-07 13:36:32.180399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.184 [2024-10-07 13:36:32.180513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.184 [2024-10-07 13:36:32.180534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.184 [2024-10-07 13:36:32.190060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.184 [2024-10-07 13:36:32.190094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.184 [2024-10-07 13:36:32.190268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.184 [2024-10-07 13:36:32.190297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.184 [2024-10-07 13:36:32.190315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.184 [2024-10-07 13:36:32.190423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.184 [2024-10-07 13:36:32.190450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.184 [2024-10-07 13:36:32.190467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.184 [2024-10-07 13:36:32.190494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.184 [2024-10-07 13:36:32.190515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.184 [2024-10-07 13:36:32.190536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.184 [2024-10-07 13:36:32.190551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.185 [2024-10-07 13:36:32.190564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.185 [2024-10-07 13:36:32.190581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.185 [2024-10-07 13:36:32.190596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.185 [2024-10-07 13:36:32.190609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.185 [2024-10-07 13:36:32.190640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.185 [2024-10-07 13:36:32.190657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.185 [2024-10-07 13:36:32.200174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.185 [2024-10-07 13:36:32.200222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.185 [2024-10-07 13:36:32.200380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.185 [2024-10-07 13:36:32.200408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.185 [2024-10-07 13:36:32.200425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.185 [2024-10-07 13:36:32.201109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.185 [2024-10-07 13:36:32.201140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.185 [2024-10-07 13:36:32.201156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.185 [2024-10-07 13:36:32.201176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.185 [2024-10-07 13:36:32.202334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.185 [2024-10-07 13:36:32.202362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.185 [2024-10-07 13:36:32.202377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.185 [2024-10-07 13:36:32.202390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.185 [2024-10-07 13:36:32.202806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.185 [2024-10-07 13:36:32.202832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.185 [2024-10-07 13:36:32.202846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.185 [2024-10-07 13:36:32.202860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.185 [2024-10-07 13:36:32.202939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.185 [2024-10-07 13:36:32.210261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.185 [2024-10-07 13:36:32.210380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.185 [2024-10-07 13:36:32.210410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.185 [2024-10-07 13:36:32.210428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.185 [2024-10-07 13:36:32.210467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.185 [2024-10-07 13:36:32.210498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.185 [2024-10-07 13:36:32.210527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.185 [2024-10-07 13:36:32.210543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.185 [2024-10-07 13:36:32.210557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.185 [2024-10-07 13:36:32.210581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.185 [2024-10-07 13:36:32.210725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.185 [2024-10-07 13:36:32.210758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.185 [2024-10-07 13:36:32.210775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.185 [2024-10-07 13:36:32.210802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.185 [2024-10-07 13:36:32.210826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.185 [2024-10-07 13:36:32.210840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.185 [2024-10-07 13:36:32.210854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.185 [2024-10-07 13:36:32.210878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.185 [2024-10-07 13:36:32.222528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.185 [2024-10-07 13:36:32.222562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.185 [2024-10-07 13:36:32.222782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.185 [2024-10-07 13:36:32.222812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.185 [2024-10-07 13:36:32.222831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.185 [2024-10-07 13:36:32.222912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.185 [2024-10-07 13:36:32.222938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.185 [2024-10-07 13:36:32.222953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.185 [2024-10-07 13:36:32.223021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.185 [2024-10-07 13:36:32.223065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.185 [2024-10-07 13:36:32.223117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.185 [2024-10-07 13:36:32.223138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.185 [2024-10-07 13:36:32.223151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.185 [2024-10-07 13:36:32.223169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.185 [2024-10-07 13:36:32.223184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.185 [2024-10-07 13:36:32.223198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.185 [2024-10-07 13:36:32.223223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.185 [2024-10-07 13:36:32.223240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.185 [2024-10-07 13:36:32.232656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.185 [2024-10-07 13:36:32.232715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.185 [2024-10-07 13:36:32.232820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.185 [2024-10-07 13:36:32.232849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.185 [2024-10-07 13:36:32.232866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.185 [2024-10-07 13:36:32.235536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.185 [2024-10-07 13:36:32.235568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.185 [2024-10-07 13:36:32.235586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.185 [2024-10-07 13:36:32.235605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.185 [2024-10-07 13:36:32.236592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.185 [2024-10-07 13:36:32.236620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.185 [2024-10-07 13:36:32.236634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.185 [2024-10-07 13:36:32.236661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.185 [2024-10-07 13:36:32.236913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.185 [2024-10-07 13:36:32.236937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.185 [2024-10-07 13:36:32.236951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.185 [2024-10-07 13:36:32.236965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.185 [2024-10-07 13:36:32.237725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.186 [2024-10-07 13:36:32.242878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.186 [2024-10-07 13:36:32.242911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.186 [2024-10-07 13:36:32.243014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.186 [2024-10-07 13:36:32.243042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.186 [2024-10-07 13:36:32.243059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.186 [2024-10-07 13:36:32.243136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.186 [2024-10-07 13:36:32.243162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.186 [2024-10-07 13:36:32.243177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.186 [2024-10-07 13:36:32.243202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.186 [2024-10-07 13:36:32.243224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.186 [2024-10-07 13:36:32.243245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.186 [2024-10-07 13:36:32.243260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.186 [2024-10-07 13:36:32.243274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.186 [2024-10-07 13:36:32.243291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.186 [2024-10-07 13:36:32.243306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.186 [2024-10-07 13:36:32.243319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.186 [2024-10-07 13:36:32.243344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.186 [2024-10-07 13:36:32.243366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.186 [2024-10-07 13:36:32.253005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.186 [2024-10-07 13:36:32.253037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.186 [2024-10-07 13:36:32.253272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.186 [2024-10-07 13:36:32.253301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.186 [2024-10-07 13:36:32.253318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.186 [2024-10-07 13:36:32.253399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.186 [2024-10-07 13:36:32.253425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.186 [2024-10-07 13:36:32.253442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.186 [2024-10-07 13:36:32.253467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.186 [2024-10-07 13:36:32.253488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.186 [2024-10-07 13:36:32.253510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.186 [2024-10-07 13:36:32.253525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.186 [2024-10-07 13:36:32.253538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.186 [2024-10-07 13:36:32.253556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.186 [2024-10-07 13:36:32.253571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.186 [2024-10-07 13:36:32.253584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.186 [2024-10-07 13:36:32.253609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.186 [2024-10-07 13:36:32.253626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.186 [2024-10-07 13:36:32.265140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.186 [2024-10-07 13:36:32.265174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.186 [2024-10-07 13:36:32.265520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.186 [2024-10-07 13:36:32.265552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.186 [2024-10-07 13:36:32.265570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.186 [2024-10-07 13:36:32.265681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.186 [2024-10-07 13:36:32.265708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.186 [2024-10-07 13:36:32.265726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.186 [2024-10-07 13:36:32.265930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.186 [2024-10-07 13:36:32.265959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.186 [2024-10-07 13:36:32.266007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.186 [2024-10-07 13:36:32.266026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.186 [2024-10-07 13:36:32.266047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.186 [2024-10-07 13:36:32.266065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.186 [2024-10-07 13:36:32.266079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.186 [2024-10-07 13:36:32.266093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.186 [2024-10-07 13:36:32.266289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.186 [2024-10-07 13:36:32.266313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.186 [2024-10-07 13:36:32.280712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.186 [2024-10-07 13:36:32.280746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.186 [2024-10-07 13:36:32.281104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.186 [2024-10-07 13:36:32.281137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.187 [2024-10-07 13:36:32.281155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.187 [2024-10-07 13:36:32.281263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.187 [2024-10-07 13:36:32.281289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.187 [2024-10-07 13:36:32.281305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.187 [2024-10-07 13:36:32.281510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.187 [2024-10-07 13:36:32.281539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.187 [2024-10-07 13:36:32.281588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.187 [2024-10-07 13:36:32.281607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.187 [2024-10-07 13:36:32.281621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.187 [2024-10-07 13:36:32.281638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.187 [2024-10-07 13:36:32.281653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.187 [2024-10-07 13:36:32.281674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.187 [2024-10-07 13:36:32.281860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.187 [2024-10-07 13:36:32.281883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.187 [2024-10-07 13:36:32.295317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.187 [2024-10-07 13:36:32.295351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.187 [2024-10-07 13:36:32.295908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.187 [2024-10-07 13:36:32.295940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.187 [2024-10-07 13:36:32.295958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.187 [2024-10-07 13:36:32.296041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.187 [2024-10-07 13:36:32.296071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.187 [2024-10-07 13:36:32.296088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.187 [2024-10-07 13:36:32.296371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.187 [2024-10-07 13:36:32.296400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.187 [2024-10-07 13:36:32.296463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.187 [2024-10-07 13:36:32.296497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.187 [2024-10-07 13:36:32.296513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.187 [2024-10-07 13:36:32.296530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.187 [2024-10-07 13:36:32.296545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.187 [2024-10-07 13:36:32.296558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.187 [2024-10-07 13:36:32.296769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.187 [2024-10-07 13:36:32.296794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.187 [2024-10-07 13:36:32.309841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.187 [2024-10-07 13:36:32.309876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.187 [2024-10-07 13:36:32.310454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.187 [2024-10-07 13:36:32.310486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.187 [2024-10-07 13:36:32.310504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.187 [2024-10-07 13:36:32.310645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.187 [2024-10-07 13:36:32.310678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.187 [2024-10-07 13:36:32.310697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.187 [2024-10-07 13:36:32.311072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.187 [2024-10-07 13:36:32.311119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.187 [2024-10-07 13:36:32.311348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.187 [2024-10-07 13:36:32.311374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.187 [2024-10-07 13:36:32.311389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.187 [2024-10-07 13:36:32.311407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.187 [2024-10-07 13:36:32.311422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.187 [2024-10-07 13:36:32.311435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.187 [2024-10-07 13:36:32.311500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.187 [2024-10-07 13:36:32.311521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.187 [2024-10-07 13:36:32.321589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.187 [2024-10-07 13:36:32.321622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.187 [2024-10-07 13:36:32.321889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.187 [2024-10-07 13:36:32.321919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.187 [2024-10-07 13:36:32.321936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.187 [2024-10-07 13:36:32.322018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.187 [2024-10-07 13:36:32.322044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.187 [2024-10-07 13:36:32.322060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.187 [2024-10-07 13:36:32.322201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.187 [2024-10-07 13:36:32.322230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.187 [2024-10-07 13:36:32.322379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.187 [2024-10-07 13:36:32.322402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.187 [2024-10-07 13:36:32.322416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.187 [2024-10-07 13:36:32.322434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.187 [2024-10-07 13:36:32.322448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.187 [2024-10-07 13:36:32.322461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.187 [2024-10-07 13:36:32.326207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.187 [2024-10-07 13:36:32.326236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.187 [2024-10-07 13:36:32.331703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.187 [2024-10-07 13:36:32.331750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.187 [2024-10-07 13:36:32.331916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.187 [2024-10-07 13:36:32.331944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.187 [2024-10-07 13:36:32.331961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.187 [2024-10-07 13:36:32.332084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.187 [2024-10-07 13:36:32.332110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.187 [2024-10-07 13:36:32.332126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.187 [2024-10-07 13:36:32.332145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.187 [2024-10-07 13:36:32.333254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.187 [2024-10-07 13:36:32.333282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.187 [2024-10-07 13:36:32.333295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.188 [2024-10-07 13:36:32.333313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.188 [2024-10-07 13:36:32.333514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.188 [2024-10-07 13:36:32.333539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.188 [2024-10-07 13:36:32.333552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.188 [2024-10-07 13:36:32.333565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.188 [2024-10-07 13:36:32.333680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.188 [2024-10-07 13:36:32.341896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.188 [2024-10-07 13:36:32.341929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.188 [2024-10-07 13:36:32.342068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.188 [2024-10-07 13:36:32.342096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.188 [2024-10-07 13:36:32.342113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.188 [2024-10-07 13:36:32.342249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.188 [2024-10-07 13:36:32.342275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.188 [2024-10-07 13:36:32.342291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.188 [2024-10-07 13:36:32.342317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.188 [2024-10-07 13:36:32.342338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.188 [2024-10-07 13:36:32.342359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.188 [2024-10-07 13:36:32.342375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.188 [2024-10-07 13:36:32.342389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.188 [2024-10-07 13:36:32.342406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.188 [2024-10-07 13:36:32.342420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.188 [2024-10-07 13:36:32.342432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.188 [2024-10-07 13:36:32.342456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.188 [2024-10-07 13:36:32.342473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.188 [2024-10-07 13:36:32.354764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.188 [2024-10-07 13:36:32.354813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.188 [2024-10-07 13:36:32.355940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.188 [2024-10-07 13:36:32.355972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.188 [2024-10-07 13:36:32.355990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.188 [2024-10-07 13:36:32.356069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.188 [2024-10-07 13:36:32.356094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.188 [2024-10-07 13:36:32.356115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.188 [2024-10-07 13:36:32.356611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.188 [2024-10-07 13:36:32.356642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.188 [2024-10-07 13:36:32.356894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.188 [2024-10-07 13:36:32.356918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.188 [2024-10-07 13:36:32.356933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.188 [2024-10-07 13:36:32.356951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.188 [2024-10-07 13:36:32.356966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.188 [2024-10-07 13:36:32.356978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.188 [2024-10-07 13:36:32.357030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.188 [2024-10-07 13:36:32.357052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.188 [2024-10-07 13:36:32.364911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.188 [2024-10-07 13:36:32.366940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.188 [2024-10-07 13:36:32.367053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.188 [2024-10-07 13:36:32.367082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.188 [2024-10-07 13:36:32.367099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.188 [2024-10-07 13:36:32.368079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.188 [2024-10-07 13:36:32.368124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.188 [2024-10-07 13:36:32.368141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.188 [2024-10-07 13:36:32.368161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.188 [2024-10-07 13:36:32.368519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.188 [2024-10-07 13:36:32.368546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.188 [2024-10-07 13:36:32.368575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.188 [2024-10-07 13:36:32.368588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.188 [2024-10-07 13:36:32.369826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.188 [2024-10-07 13:36:32.369852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.188 [2024-10-07 13:36:32.369865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.188 [2024-10-07 13:36:32.369879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.188 [2024-10-07 13:36:32.370012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.188 [2024-10-07 13:36:32.375011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.188 [2024-10-07 13:36:32.375212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.188 [2024-10-07 13:36:32.375242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.188 [2024-10-07 13:36:32.375260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.188 [2024-10-07 13:36:32.375285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.188 [2024-10-07 13:36:32.375309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.188 [2024-10-07 13:36:32.375324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.188 [2024-10-07 13:36:32.375339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.188 [2024-10-07 13:36:32.375364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.188 [2024-10-07 13:36:32.377404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.188 [2024-10-07 13:36:32.377555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.188 [2024-10-07 13:36:32.377584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.188 [2024-10-07 13:36:32.377601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.188 [2024-10-07 13:36:32.377626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.188 [2024-10-07 13:36:32.377650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.188 [2024-10-07 13:36:32.377674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.188 [2024-10-07 13:36:32.377689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.188 [2024-10-07 13:36:32.377722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.188 [2024-10-07 13:36:32.385104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.188 [2024-10-07 13:36:32.385278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.188 [2024-10-07 13:36:32.385308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.188 [2024-10-07 13:36:32.385327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.188 [2024-10-07 13:36:32.385353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.188 [2024-10-07 13:36:32.385377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.188 [2024-10-07 13:36:32.385392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.188 [2024-10-07 13:36:32.385406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.188 [2024-10-07 13:36:32.385431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.188 [2024-10-07 13:36:32.387485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.189 [2024-10-07 13:36:32.387631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.189 [2024-10-07 13:36:32.387660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.189 [2024-10-07 13:36:32.387687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.189 [2024-10-07 13:36:32.387719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.189 [2024-10-07 13:36:32.387743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.189 [2024-10-07 13:36:32.387758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.189 [2024-10-07 13:36:32.387772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.189 [2024-10-07 13:36:32.387796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.189 [2024-10-07 13:36:32.398707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.189 [2024-10-07 13:36:32.398788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.189 [2024-10-07 13:36:32.398927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.189 [2024-10-07 13:36:32.398957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.189 [2024-10-07 13:36:32.398975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.189 [2024-10-07 13:36:32.399096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.189 [2024-10-07 13:36:32.399123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.189 [2024-10-07 13:36:32.399139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.189 [2024-10-07 13:36:32.399158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.189 [2024-10-07 13:36:32.399184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.189 [2024-10-07 13:36:32.399202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.189 [2024-10-07 13:36:32.399216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.189 [2024-10-07 13:36:32.399229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.189 [2024-10-07 13:36:32.399255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.189 [2024-10-07 13:36:32.399272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.189 [2024-10-07 13:36:32.399285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.189 [2024-10-07 13:36:32.399298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.189 [2024-10-07 13:36:32.399335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.189 [2024-10-07 13:36:32.412620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.189 [2024-10-07 13:36:32.412653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.189 [2024-10-07 13:36:32.412774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.189 [2024-10-07 13:36:32.412804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.189 [2024-10-07 13:36:32.412822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.189 [2024-10-07 13:36:32.412933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.189 [2024-10-07 13:36:32.412960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.189 [2024-10-07 13:36:32.412976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.189 [2024-10-07 13:36:32.413007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.189 [2024-10-07 13:36:32.413030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.189 [2024-10-07 13:36:32.413051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.189 [2024-10-07 13:36:32.413066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.189 [2024-10-07 13:36:32.413079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.189 [2024-10-07 13:36:32.413096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.189 [2024-10-07 13:36:32.413111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.189 [2024-10-07 13:36:32.413123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.189 [2024-10-07 13:36:32.413148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.189 [2024-10-07 13:36:32.413164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.189 [2024-10-07 13:36:32.427827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.189 [2024-10-07 13:36:32.427861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.189 [2024-10-07 13:36:32.428043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.189 [2024-10-07 13:36:32.428073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.189 [2024-10-07 13:36:32.428091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.189 [2024-10-07 13:36:32.428176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.189 [2024-10-07 13:36:32.428204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.189 [2024-10-07 13:36:32.428220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.189 [2024-10-07 13:36:32.429300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.189 [2024-10-07 13:36:32.429345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.189 [2024-10-07 13:36:32.430002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.189 [2024-10-07 13:36:32.430041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.189 [2024-10-07 13:36:32.430055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.189 [2024-10-07 13:36:32.430072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.189 [2024-10-07 13:36:32.430086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.189 [2024-10-07 13:36:32.430099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.189 [2024-10-07 13:36:32.430394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.189 [2024-10-07 13:36:32.430420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.189 [2024-10-07 13:36:32.439041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.189 [2024-10-07 13:36:32.439075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.189 [2024-10-07 13:36:32.439307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.189 [2024-10-07 13:36:32.439338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.189 [2024-10-07 13:36:32.439355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.189 [2024-10-07 13:36:32.439466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.189 [2024-10-07 13:36:32.439493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.189 [2024-10-07 13:36:32.439509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.189 [2024-10-07 13:36:32.439627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.189 [2024-10-07 13:36:32.439655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.189 [2024-10-07 13:36:32.442830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.189 [2024-10-07 13:36:32.442856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.189 [2024-10-07 13:36:32.442871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.189 [2024-10-07 13:36:32.442888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.189 [2024-10-07 13:36:32.442902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.189 [2024-10-07 13:36:32.442915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.189 [2024-10-07 13:36:32.443880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.189 [2024-10-07 13:36:32.443905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.189 [2024-10-07 13:36:32.449310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.189 [2024-10-07 13:36:32.449340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.189 [2024-10-07 13:36:32.449484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.189 [2024-10-07 13:36:32.449513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.189 [2024-10-07 13:36:32.449531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.189 [2024-10-07 13:36:32.449643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.189 [2024-10-07 13:36:32.449680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.189 [2024-10-07 13:36:32.449699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.189 [2024-10-07 13:36:32.449725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.189 [2024-10-07 13:36:32.449746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.190 [2024-10-07 13:36:32.449768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.190 [2024-10-07 13:36:32.449782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.190 [2024-10-07 13:36:32.449796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.190 [2024-10-07 13:36:32.449812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.190 [2024-10-07 13:36:32.449832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.190 [2024-10-07 13:36:32.449845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.190 [2024-10-07 13:36:32.449871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.190 [2024-10-07 13:36:32.449888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.190 [2024-10-07 13:36:32.459476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.190 [2024-10-07 13:36:32.459509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.190 [2024-10-07 13:36:32.459858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.190 [2024-10-07 13:36:32.459890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.190 [2024-10-07 13:36:32.459908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.190 [2024-10-07 13:36:32.460021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.190 [2024-10-07 13:36:32.460048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.190 [2024-10-07 13:36:32.460064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.190 [2024-10-07 13:36:32.460269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.190 [2024-10-07 13:36:32.460298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.190 [2024-10-07 13:36:32.460347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.190 [2024-10-07 13:36:32.460367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.190 [2024-10-07 13:36:32.460381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.190 [2024-10-07 13:36:32.460398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.190 [2024-10-07 13:36:32.460413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.190 [2024-10-07 13:36:32.460426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.190 [2024-10-07 13:36:32.460608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.190 [2024-10-07 13:36:32.460632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.190 [2024-10-07 13:36:32.473501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.190 [2024-10-07 13:36:32.473535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.190 [2024-10-07 13:36:32.474055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.190 [2024-10-07 13:36:32.474087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.190 [2024-10-07 13:36:32.474104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.190 [2024-10-07 13:36:32.474224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.190 [2024-10-07 13:36:32.474249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.190 [2024-10-07 13:36:32.474266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.190 [2024-10-07 13:36:32.474656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.190 [2024-10-07 13:36:32.474703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.190 [2024-10-07 13:36:32.474920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.190 [2024-10-07 13:36:32.474944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.190 [2024-10-07 13:36:32.474958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.190 [2024-10-07 13:36:32.474976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.190 [2024-10-07 13:36:32.474990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.190 [2024-10-07 13:36:32.475004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.190 [2024-10-07 13:36:32.475068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.190 [2024-10-07 13:36:32.475088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.190 [2024-10-07 13:36:32.484153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.190 [2024-10-07 13:36:32.484187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.190 [2024-10-07 13:36:32.484449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.190 [2024-10-07 13:36:32.484480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.190 [2024-10-07 13:36:32.484498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.190 [2024-10-07 13:36:32.484577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.190 [2024-10-07 13:36:32.484605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.190 [2024-10-07 13:36:32.484622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.190 [2024-10-07 13:36:32.484739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.190 [2024-10-07 13:36:32.484767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.190 [2024-10-07 13:36:32.484886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.190 [2024-10-07 13:36:32.484909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.190 [2024-10-07 13:36:32.484923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.190 [2024-10-07 13:36:32.484941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.190 [2024-10-07 13:36:32.484955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.190 [2024-10-07 13:36:32.484968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.190 [2024-10-07 13:36:32.485088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.190 [2024-10-07 13:36:32.485111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.190 [2024-10-07 13:36:32.494265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.190 [2024-10-07 13:36:32.494311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.190 [2024-10-07 13:36:32.494474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.190 [2024-10-07 13:36:32.494510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.190 [2024-10-07 13:36:32.494529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.190 [2024-10-07 13:36:32.494610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.191 [2024-10-07 13:36:32.494637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.191 [2024-10-07 13:36:32.494653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.191 [2024-10-07 13:36:32.494680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.191 [2024-10-07 13:36:32.494927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.191 [2024-10-07 13:36:32.494968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.191 [2024-10-07 13:36:32.494983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.191 [2024-10-07 13:36:32.494995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.191 [2024-10-07 13:36:32.495158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.191 [2024-10-07 13:36:32.495184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.191 [2024-10-07 13:36:32.495198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.191 [2024-10-07 13:36:32.495211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.191 [2024-10-07 13:36:32.495317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.191 [2024-10-07 13:36:32.505404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.191 [2024-10-07 13:36:32.505438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.191 [2024-10-07 13:36:32.505578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.191 [2024-10-07 13:36:32.505608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.191 [2024-10-07 13:36:32.505625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.191 [2024-10-07 13:36:32.505728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.191 [2024-10-07 13:36:32.505756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.191 [2024-10-07 13:36:32.505773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.191 [2024-10-07 13:36:32.505799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.191 [2024-10-07 13:36:32.505820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.191 [2024-10-07 13:36:32.505857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.191 [2024-10-07 13:36:32.505877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.191 [2024-10-07 13:36:32.505891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.191 [2024-10-07 13:36:32.505908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.191 [2024-10-07 13:36:32.505922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.191 [2024-10-07 13:36:32.505940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.191 [2024-10-07 13:36:32.505982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.191 [2024-10-07 13:36:32.505998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.191 [2024-10-07 13:36:32.518465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.191 [2024-10-07 13:36:32.518497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.191 [2024-10-07 13:36:32.519156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.191 [2024-10-07 13:36:32.519187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.191 [2024-10-07 13:36:32.519205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.191 [2024-10-07 13:36:32.519292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.191 [2024-10-07 13:36:32.519319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.191 [2024-10-07 13:36:32.519335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.191 [2024-10-07 13:36:32.519570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.191 [2024-10-07 13:36:32.519600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.191 [2024-10-07 13:36:32.519648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.191 [2024-10-07 13:36:32.519675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.191 [2024-10-07 13:36:32.519692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.191 [2024-10-07 13:36:32.519709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.191 [2024-10-07 13:36:32.519724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.191 [2024-10-07 13:36:32.519737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.191 [2024-10-07 13:36:32.519777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.191 [2024-10-07 13:36:32.519797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.191 [2024-10-07 13:36:32.533301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.191 [2024-10-07 13:36:32.533334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.191 [2024-10-07 13:36:32.533832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.191 [2024-10-07 13:36:32.533863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.191 [2024-10-07 13:36:32.533881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.191 [2024-10-07 13:36:32.534017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.191 [2024-10-07 13:36:32.534044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.191 [2024-10-07 13:36:32.534061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.191 [2024-10-07 13:36:32.534278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.191 [2024-10-07 13:36:32.534314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.191 [2024-10-07 13:36:32.534364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.191 [2024-10-07 13:36:32.534385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.191 [2024-10-07 13:36:32.534399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.191 [2024-10-07 13:36:32.534416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.191 [2024-10-07 13:36:32.534432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.191 [2024-10-07 13:36:32.534445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.191 [2024-10-07 13:36:32.534712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.191 [2024-10-07 13:36:32.534736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.191 [2024-10-07 13:36:32.548943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.191 [2024-10-07 13:36:32.548991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.191 [2024-10-07 13:36:32.549351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.191 [2024-10-07 13:36:32.549382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.191 [2024-10-07 13:36:32.549399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.191 [2024-10-07 13:36:32.549487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.191 [2024-10-07 13:36:32.549512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.191 [2024-10-07 13:36:32.549528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.191 [2024-10-07 13:36:32.549897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.191 [2024-10-07 13:36:32.549927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.191 [2024-10-07 13:36:32.550001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.191 [2024-10-07 13:36:32.550022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.191 [2024-10-07 13:36:32.550036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.191 [2024-10-07 13:36:32.550054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.191 [2024-10-07 13:36:32.550068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.191 [2024-10-07 13:36:32.550080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.191 [2024-10-07 13:36:32.550263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.191 [2024-10-07 13:36:32.550301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.192 [2024-10-07 13:36:32.564879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.192 [2024-10-07 13:36:32.564913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.192 [2024-10-07 13:36:32.565273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.192 [2024-10-07 13:36:32.565304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.192 [2024-10-07 13:36:32.565328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.192 [2024-10-07 13:36:32.565409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.192 [2024-10-07 13:36:32.565435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.192 [2024-10-07 13:36:32.565451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.192 [2024-10-07 13:36:32.565655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.192 [2024-10-07 13:36:32.565694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.192 [2024-10-07 13:36:32.565744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.192 [2024-10-07 13:36:32.565764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.192 [2024-10-07 13:36:32.565777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.192 [2024-10-07 13:36:32.565795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.192 [2024-10-07 13:36:32.565809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.192 [2024-10-07 13:36:32.565823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.192 [2024-10-07 13:36:32.566006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.192 [2024-10-07 13:36:32.566030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.192 [2024-10-07 13:36:32.579378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.192 [2024-10-07 13:36:32.579411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.192 [2024-10-07 13:36:32.579839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.192 [2024-10-07 13:36:32.579870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.192 [2024-10-07 13:36:32.579888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.192 [2024-10-07 13:36:32.579975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.192 [2024-10-07 13:36:32.580000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.192 [2024-10-07 13:36:32.580016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.192 [2024-10-07 13:36:32.580223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.192 [2024-10-07 13:36:32.580252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.192 [2024-10-07 13:36:32.580301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.192 [2024-10-07 13:36:32.580321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.192 [2024-10-07 13:36:32.580335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.192 [2024-10-07 13:36:32.580353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.192 [2024-10-07 13:36:32.580367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.192 [2024-10-07 13:36:32.580380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.192 [2024-10-07 13:36:32.580630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.192 [2024-10-07 13:36:32.580655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.192 [2024-10-07 13:36:32.593728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.192 [2024-10-07 13:36:32.593762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.192 [2024-10-07 13:36:32.594055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.192 [2024-10-07 13:36:32.594086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.192 [2024-10-07 13:36:32.594104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.192 [2024-10-07 13:36:32.594213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.192 [2024-10-07 13:36:32.594240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.192 [2024-10-07 13:36:32.594256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.192 [2024-10-07 13:36:32.594756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.192 [2024-10-07 13:36:32.594786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.192 [2024-10-07 13:36:32.595020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.192 [2024-10-07 13:36:32.595044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.192 [2024-10-07 13:36:32.595059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.192 [2024-10-07 13:36:32.595077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.192 [2024-10-07 13:36:32.595091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.192 [2024-10-07 13:36:32.595103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.192 [2024-10-07 13:36:32.595307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.192 [2024-10-07 13:36:32.595331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.192 [2024-10-07 13:36:32.605292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.192 [2024-10-07 13:36:32.605325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.192 [2024-10-07 13:36:32.609773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.192 [2024-10-07 13:36:32.609806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.192 [2024-10-07 13:36:32.609824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.192 [2024-10-07 13:36:32.609914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.192 [2024-10-07 13:36:32.609939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.192 [2024-10-07 13:36:32.609955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.192 [2024-10-07 13:36:32.610664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.192 [2024-10-07 13:36:32.610706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.192 [2024-10-07 13:36:32.610980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.192 [2024-10-07 13:36:32.611004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.192 [2024-10-07 13:36:32.611018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.192 [2024-10-07 13:36:32.611036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.192 [2024-10-07 13:36:32.611052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.192 [2024-10-07 13:36:32.611065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.192 [2024-10-07 13:36:32.611267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.192 [2024-10-07 13:36:32.611292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.192 [2024-10-07 13:36:32.615404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.192 [2024-10-07 13:36:32.615448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.192 [2024-10-07 13:36:32.615677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.192 [2024-10-07 13:36:32.615706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.192 [2024-10-07 13:36:32.615723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.192 [2024-10-07 13:36:32.615818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.192 [2024-10-07 13:36:32.615845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.192 [2024-10-07 13:36:32.615862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.192 [2024-10-07 13:36:32.615881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.192 [2024-10-07 13:36:32.615907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.192 [2024-10-07 13:36:32.615926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.192 [2024-10-07 13:36:32.615939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.192 [2024-10-07 13:36:32.615952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.192 [2024-10-07 13:36:32.615976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.192 [2024-10-07 13:36:32.615994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.192 [2024-10-07 13:36:32.616006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.192 [2024-10-07 13:36:32.616020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.192 [2024-10-07 13:36:32.616043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.192 [2024-10-07 13:36:32.625483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.192 [2024-10-07 13:36:32.625662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.192 [2024-10-07 13:36:32.625699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.192 [2024-10-07 13:36:32.625717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.193 [2024-10-07 13:36:32.625754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.193 [2024-10-07 13:36:32.625791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.193 [2024-10-07 13:36:32.625821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.193 [2024-10-07 13:36:32.625838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.193 [2024-10-07 13:36:32.625853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.193 [2024-10-07 13:36:32.625877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.193 [2024-10-07 13:36:32.626035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.193 [2024-10-07 13:36:32.626063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.193 [2024-10-07 13:36:32.626080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.193 [2024-10-07 13:36:32.626105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.193 8461.36 IOPS, 33.05 MiB/s [2024-10-07T11:36:37.905Z] [2024-10-07 13:36:32.627713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.193 [2024-10-07 13:36:32.627733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.193 [2024-10-07 13:36:32.627746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.193 [2024-10-07 13:36:32.627770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.193 [2024-10-07 13:36:32.637647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.193 [2024-10-07 13:36:32.637688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.193 [2024-10-07 13:36:32.637886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.193 [2024-10-07 13:36:32.637916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.193 [2024-10-07 13:36:32.637933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.193 [2024-10-07 13:36:32.638044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.193 [2024-10-07 13:36:32.638071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.193 [2024-10-07 13:36:32.638087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.193 [2024-10-07 13:36:32.638195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.193 [2024-10-07 13:36:32.638223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.193 [2024-10-07 13:36:32.638341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.193 [2024-10-07 13:36:32.638362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.193 [2024-10-07 13:36:32.638389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.193 [2024-10-07 13:36:32.638406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.193 [2024-10-07 13:36:32.638420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.193 [2024-10-07 13:36:32.638432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.193 [2024-10-07 13:36:32.638538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.193 [2024-10-07 13:36:32.638557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.193 [2024-10-07 13:36:32.648012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.193 [2024-10-07 13:36:32.648046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.193 [2024-10-07 13:36:32.648391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.193 [2024-10-07 13:36:32.648423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.193 [2024-10-07 13:36:32.648440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.193 [2024-10-07 13:36:32.648524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.193 [2024-10-07 13:36:32.648550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.193 [2024-10-07 13:36:32.648566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.193 [2024-10-07 13:36:32.648699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.193 [2024-10-07 13:36:32.648729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.193 [2024-10-07 13:36:32.648832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.193 [2024-10-07 13:36:32.648853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.193 [2024-10-07 13:36:32.648867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.193 [2024-10-07 13:36:32.648885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.193 [2024-10-07 13:36:32.648900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.193 [2024-10-07 13:36:32.648928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.193 [2024-10-07 13:36:32.649045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.193 [2024-10-07 13:36:32.649065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.193 [2024-10-07 13:36:32.658190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.193 [2024-10-07 13:36:32.658223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.193 [2024-10-07 13:36:32.658386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.193 [2024-10-07 13:36:32.658416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.193 [2024-10-07 13:36:32.658433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.193 [2024-10-07 13:36:32.658543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.193 [2024-10-07 13:36:32.658570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.193 [2024-10-07 13:36:32.658586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.193 [2024-10-07 13:36:32.658922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.193 [2024-10-07 13:36:32.658953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.193 [2024-10-07 13:36:32.659192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.193 [2024-10-07 13:36:32.659222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.193 [2024-10-07 13:36:32.659238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.193 [2024-10-07 13:36:32.659255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.193 [2024-10-07 13:36:32.659269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.193 [2024-10-07 13:36:32.659282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.193 [2024-10-07 13:36:32.659334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.193 [2024-10-07 13:36:32.659355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.193 [2024-10-07 13:36:32.672290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.193 [2024-10-07 13:36:32.672323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.193 [2024-10-07 13:36:32.672636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.193 [2024-10-07 13:36:32.672676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.193 [2024-10-07 13:36:32.672696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.193 [2024-10-07 13:36:32.672833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.193 [2024-10-07 13:36:32.672861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.193 [2024-10-07 13:36:32.672877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.193 [2024-10-07 13:36:32.673157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.193 [2024-10-07 13:36:32.673187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.193 [2024-10-07 13:36:32.673420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.193 [2024-10-07 13:36:32.673444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.193 [2024-10-07 13:36:32.673458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.193 [2024-10-07 13:36:32.673476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.193 [2024-10-07 13:36:32.673491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.193 [2024-10-07 13:36:32.673504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.193 [2024-10-07 13:36:32.673719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.193 [2024-10-07 13:36:32.673744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.193 [2024-10-07 13:36:32.683282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.193 [2024-10-07 13:36:32.683315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.193 [2024-10-07 13:36:32.683607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.193 [2024-10-07 13:36:32.683637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.193 [2024-10-07 13:36:32.683655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.193 [2024-10-07 13:36:32.683789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.193 [2024-10-07 13:36:32.683817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.193 [2024-10-07 13:36:32.683833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.193 [2024-10-07 13:36:32.683942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.193 [2024-10-07 13:36:32.683970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.193 [2024-10-07 13:36:32.684087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.193 [2024-10-07 13:36:32.684110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.193 [2024-10-07 13:36:32.684123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.193 [2024-10-07 13:36:32.684140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.193 [2024-10-07 13:36:32.684154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.193 [2024-10-07 13:36:32.684166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.193 [2024-10-07 13:36:32.687574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.194 [2024-10-07 13:36:32.687601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.194 [2024-10-07 13:36:32.693396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.194 [2024-10-07 13:36:32.693442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.194 [2024-10-07 13:36:32.693608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.194 [2024-10-07 13:36:32.693636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.194 [2024-10-07 13:36:32.693653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.194 [2024-10-07 13:36:32.693776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.194 [2024-10-07 13:36:32.693803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.194 [2024-10-07 13:36:32.693819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.194 [2024-10-07 13:36:32.693837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.194 [2024-10-07 13:36:32.693863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.194 [2024-10-07 13:36:32.693881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.194 [2024-10-07 13:36:32.693895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.194 [2024-10-07 13:36:32.693908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.194 [2024-10-07 13:36:32.693932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.194 [2024-10-07 13:36:32.693949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.194 [2024-10-07 13:36:32.693962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.194 [2024-10-07 13:36:32.693975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.194 [2024-10-07 13:36:32.694018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.194 [2024-10-07 13:36:32.703564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.194 [2024-10-07 13:36:32.703598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.194 [2024-10-07 13:36:32.703716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.194 [2024-10-07 13:36:32.703745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.194 [2024-10-07 13:36:32.703762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.194 [2024-10-07 13:36:32.703870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.194 [2024-10-07 13:36:32.703897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.194 [2024-10-07 13:36:32.703914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.194 [2024-10-07 13:36:32.703939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.194 [2024-10-07 13:36:32.703960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.194 [2024-10-07 13:36:32.703981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.194 [2024-10-07 13:36:32.703996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.194 [2024-10-07 13:36:32.704010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.194 [2024-10-07 13:36:32.704026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.194 [2024-10-07 13:36:32.704041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.194 [2024-10-07 13:36:32.704053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.194 [2024-10-07 13:36:32.704077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.194 [2024-10-07 13:36:32.704093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.194 [2024-10-07 13:36:32.713717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.194 [2024-10-07 13:36:32.713749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.194 [2024-10-07 13:36:32.713899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.194 [2024-10-07 13:36:32.713926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.194 [2024-10-07 13:36:32.713943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.194 [2024-10-07 13:36:32.714138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.194 [2024-10-07 13:36:32.714166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.194 [2024-10-07 13:36:32.714183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.194 [2024-10-07 13:36:32.718075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.194 [2024-10-07 13:36:32.718107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.194 [2024-10-07 13:36:32.718627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.194 [2024-10-07 13:36:32.718652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.194 [2024-10-07 13:36:32.718680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.194 [2024-10-07 13:36:32.718700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.194 [2024-10-07 13:36:32.718715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.194 [2024-10-07 13:36:32.718727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.194 [2024-10-07 13:36:32.718806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.194 [2024-10-07 13:36:32.718827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.194 [2024-10-07 13:36:32.724954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.194 [2024-10-07 13:36:32.724986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.194 [2024-10-07 13:36:32.725176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.194 [2024-10-07 13:36:32.725207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.194 [2024-10-07 13:36:32.725224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.194 [2024-10-07 13:36:32.725335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.194 [2024-10-07 13:36:32.725362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.194 [2024-10-07 13:36:32.725379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.194 [2024-10-07 13:36:32.725478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.194 [2024-10-07 13:36:32.725506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.194 [2024-10-07 13:36:32.725529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.194 [2024-10-07 13:36:32.725544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.194 [2024-10-07 13:36:32.725558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.194 [2024-10-07 13:36:32.725575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.194 [2024-10-07 13:36:32.725590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.194 [2024-10-07 13:36:32.725602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.194 [2024-10-07 13:36:32.725626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.194 [2024-10-07 13:36:32.725642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.194 [2024-10-07 13:36:32.735076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.194 [2024-10-07 13:36:32.735108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.194 [2024-10-07 13:36:32.735272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.194 [2024-10-07 13:36:32.735302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.194 [2024-10-07 13:36:32.735319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.194 [2024-10-07 13:36:32.735430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.194 [2024-10-07 13:36:32.735463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.194 [2024-10-07 13:36:32.735480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.194 [2024-10-07 13:36:32.735674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.194 [2024-10-07 13:36:32.735705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.194 [2024-10-07 13:36:32.735754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.194 [2024-10-07 13:36:32.735774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.194 [2024-10-07 13:36:32.735787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.194 [2024-10-07 13:36:32.735805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.194 [2024-10-07 13:36:32.735820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.194 [2024-10-07 13:36:32.735833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.194 [2024-10-07 13:36:32.736016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.194 [2024-10-07 13:36:32.736056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.194 [2024-10-07 13:36:32.749819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.194 [2024-10-07 13:36:32.749854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.194 [2024-10-07 13:36:32.750343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.194 [2024-10-07 13:36:32.750374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.194 [2024-10-07 13:36:32.750392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.194 [2024-10-07 13:36:32.750502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.194 [2024-10-07 13:36:32.750527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.194 [2024-10-07 13:36:32.750543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.194 [2024-10-07 13:36:32.750757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.194 [2024-10-07 13:36:32.750787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.194 [2024-10-07 13:36:32.751302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.194 [2024-10-07 13:36:32.751326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.194 [2024-10-07 13:36:32.751339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.194 [2024-10-07 13:36:32.751357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.194 [2024-10-07 13:36:32.751371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.195 [2024-10-07 13:36:32.751383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.195 [2024-10-07 13:36:32.751615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.195 [2024-10-07 13:36:32.751640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.195 [2024-10-07 13:36:32.760851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.195 [2024-10-07 13:36:32.760884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.195 [2024-10-07 13:36:32.761101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.195 [2024-10-07 13:36:32.761131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.195 [2024-10-07 13:36:32.761149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.195 [2024-10-07 13:36:32.761226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.195 [2024-10-07 13:36:32.761254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.195 [2024-10-07 13:36:32.761270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.195 [2024-10-07 13:36:32.764407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.195 [2024-10-07 13:36:32.764438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.195 [2024-10-07 13:36:32.765281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.195 [2024-10-07 13:36:32.765320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.195 [2024-10-07 13:36:32.765334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.195 [2024-10-07 13:36:32.765351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.195 [2024-10-07 13:36:32.765365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.195 [2024-10-07 13:36:32.765377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.195 [2024-10-07 13:36:32.765835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.195 [2024-10-07 13:36:32.765861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.195 [2024-10-07 13:36:32.770966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.195 [2024-10-07 13:36:32.771014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.195 [2024-10-07 13:36:32.771170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.195 [2024-10-07 13:36:32.771199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.195 [2024-10-07 13:36:32.771216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.195 [2024-10-07 13:36:32.771329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.195 [2024-10-07 13:36:32.771356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.195 [2024-10-07 13:36:32.771372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.195 [2024-10-07 13:36:32.771391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.195 [2024-10-07 13:36:32.771418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.195 [2024-10-07 13:36:32.771436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.195 [2024-10-07 13:36:32.771449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.195 [2024-10-07 13:36:32.771467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.195 [2024-10-07 13:36:32.771493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.195 [2024-10-07 13:36:32.771511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.195 [2024-10-07 13:36:32.771523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.195 [2024-10-07 13:36:32.771551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.195 [2024-10-07 13:36:32.771574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.195 [2024-10-07 13:36:32.781106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.195 [2024-10-07 13:36:32.781155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.195 [2024-10-07 13:36:32.781313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.195 [2024-10-07 13:36:32.781342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.195 [2024-10-07 13:36:32.781359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.195 [2024-10-07 13:36:32.781501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.195 [2024-10-07 13:36:32.781528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.195 [2024-10-07 13:36:32.781545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.195 [2024-10-07 13:36:32.781563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.195 [2024-10-07 13:36:32.781589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.195 [2024-10-07 13:36:32.781607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.195 [2024-10-07 13:36:32.781621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.195 [2024-10-07 13:36:32.781634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.195 [2024-10-07 13:36:32.781659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.195 [2024-10-07 13:36:32.781690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.195 [2024-10-07 13:36:32.781705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.195 [2024-10-07 13:36:32.781718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.195 [2024-10-07 13:36:32.781978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.195 [2024-10-07 13:36:32.795382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.195 [2024-10-07 13:36:32.795417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.195 [2024-10-07 13:36:32.795692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.195 [2024-10-07 13:36:32.795724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.195 [2024-10-07 13:36:32.795741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.195 [2024-10-07 13:36:32.795844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.195 [2024-10-07 13:36:32.795872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.195 [2024-10-07 13:36:32.795894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.195 [2024-10-07 13:36:32.796227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.195 [2024-10-07 13:36:32.796256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.195 [2024-10-07 13:36:32.796840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.195 [2024-10-07 13:36:32.796865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.195 [2024-10-07 13:36:32.796879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.195 [2024-10-07 13:36:32.796897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.195 [2024-10-07 13:36:32.796911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.195 [2024-10-07 13:36:32.796924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.195 [2024-10-07 13:36:32.797148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.195 [2024-10-07 13:36:32.797172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.195 [2024-10-07 13:36:32.809008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.195 [2024-10-07 13:36:32.809042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.196 [2024-10-07 13:36:32.809395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.196 [2024-10-07 13:36:32.809426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.196 [2024-10-07 13:36:32.809443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.196 [2024-10-07 13:36:32.809528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.196 [2024-10-07 13:36:32.809555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.196 [2024-10-07 13:36:32.809572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.196 [2024-10-07 13:36:32.810055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.196 [2024-10-07 13:36:32.810085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.196 [2024-10-07 13:36:32.810392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.196 [2024-10-07 13:36:32.810417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.196 [2024-10-07 13:36:32.810431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.196 [2024-10-07 13:36:32.810449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.196 [2024-10-07 13:36:32.810480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.196 [2024-10-07 13:36:32.810493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.196 [2024-10-07 13:36:32.810730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.196 [2024-10-07 13:36:32.810755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.196 [2024-10-07 13:36:32.823798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.196 [2024-10-07 13:36:32.823837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.196 [2024-10-07 13:36:32.824741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.196 [2024-10-07 13:36:32.824773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.196 [2024-10-07 13:36:32.824791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.196 [2024-10-07 13:36:32.824874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.196 [2024-10-07 13:36:32.824899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.196 [2024-10-07 13:36:32.824916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.196 [2024-10-07 13:36:32.825303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.196 [2024-10-07 13:36:32.825348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.196 [2024-10-07 13:36:32.825650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.196 [2024-10-07 13:36:32.825690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.196 [2024-10-07 13:36:32.825707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.196 [2024-10-07 13:36:32.825726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.196 [2024-10-07 13:36:32.825741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.196 [2024-10-07 13:36:32.825754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.196 [2024-10-07 13:36:32.825963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.196 [2024-10-07 13:36:32.825988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.196 [2024-10-07 13:36:32.835989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.196 [2024-10-07 13:36:32.836022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.196 [2024-10-07 13:36:32.836283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.196 [2024-10-07 13:36:32.836314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.196 [2024-10-07 13:36:32.836331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.196 [2024-10-07 13:36:32.836408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.196 [2024-10-07 13:36:32.836434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.196 [2024-10-07 13:36:32.836450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.196 [2024-10-07 13:36:32.836559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.196 [2024-10-07 13:36:32.836586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.196 [2024-10-07 13:36:32.836714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.196 [2024-10-07 13:36:32.836736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.196 [2024-10-07 13:36:32.836750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.196 [2024-10-07 13:36:32.836772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.196 [2024-10-07 13:36:32.836787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.196 [2024-10-07 13:36:32.836800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.196 [2024-10-07 13:36:32.838154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.196 [2024-10-07 13:36:32.838179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.196 [2024-10-07 13:36:32.846103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.196 [2024-10-07 13:36:32.846148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.196 [2024-10-07 13:36:32.846284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.196 [2024-10-07 13:36:32.846313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.196 [2024-10-07 13:36:32.846330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.196 [2024-10-07 13:36:32.846421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.196 [2024-10-07 13:36:32.846447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.196 [2024-10-07 13:36:32.846463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.196 [2024-10-07 13:36:32.846482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.196 [2024-10-07 13:36:32.846508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.196 [2024-10-07 13:36:32.846526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.196 [2024-10-07 13:36:32.846539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.196 [2024-10-07 13:36:32.846553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.196 [2024-10-07 13:36:32.846577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.196 [2024-10-07 13:36:32.846595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.196 [2024-10-07 13:36:32.846607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.196 [2024-10-07 13:36:32.846620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.196 [2024-10-07 13:36:32.846641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.196 [2024-10-07 13:36:32.856188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.196 [2024-10-07 13:36:32.856318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.196 [2024-10-07 13:36:32.856349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.196 [2024-10-07 13:36:32.856367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.196 [2024-10-07 13:36:32.856405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.196 [2024-10-07 13:36:32.856437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.196 [2024-10-07 13:36:32.856466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.196 [2024-10-07 13:36:32.856482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.196 [2024-10-07 13:36:32.856501] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.196 [2024-10-07 13:36:32.856526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.196 [2024-10-07 13:36:32.856735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.196 [2024-10-07 13:36:32.856764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.196 [2024-10-07 13:36:32.856780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.196 [2024-10-07 13:36:32.858114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.196 [2024-10-07 13:36:32.858722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.197 [2024-10-07 13:36:32.858747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.197 [2024-10-07 13:36:32.858761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.197 [2024-10-07 13:36:32.859129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.197 [2024-10-07 13:36:32.867930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.197 [2024-10-07 13:36:32.867963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.197 [2024-10-07 13:36:32.868310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.197 [2024-10-07 13:36:32.868341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.197 [2024-10-07 13:36:32.868359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.197 [2024-10-07 13:36:32.868472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.197 [2024-10-07 13:36:32.868498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.197 [2024-10-07 13:36:32.868514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.197 [2024-10-07 13:36:32.868637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.197 [2024-10-07 13:36:32.868674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.197 [2024-10-07 13:36:32.868782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.197 [2024-10-07 13:36:32.868805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.197 [2024-10-07 13:36:32.868818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.197 [2024-10-07 13:36:32.868837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.197 [2024-10-07 13:36:32.868853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.197 [2024-10-07 13:36:32.868866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.197 [2024-10-07 13:36:32.871306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.197 [2024-10-07 13:36:32.871332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.197 [2024-10-07 13:36:32.878170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.197 [2024-10-07 13:36:32.878202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.197 [2024-10-07 13:36:32.878699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.197 [2024-10-07 13:36:32.878730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.197 [2024-10-07 13:36:32.878747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.197 [2024-10-07 13:36:32.878849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.197 [2024-10-07 13:36:32.878874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.197 [2024-10-07 13:36:32.878890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.197 [2024-10-07 13:36:32.879177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.197 [2024-10-07 13:36:32.879203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.197 [2024-10-07 13:36:32.879225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.197 [2024-10-07 13:36:32.879239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.197 [2024-10-07 13:36:32.879252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.197 [2024-10-07 13:36:32.879267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.197 [2024-10-07 13:36:32.879281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.197 [2024-10-07 13:36:32.879293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.197 [2024-10-07 13:36:32.879316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.197 [2024-10-07 13:36:32.879332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.197 [2024-10-07 13:36:32.888652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.197 [2024-10-07 13:36:32.888708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.197 [2024-10-07 13:36:32.889101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.197 [2024-10-07 13:36:32.889133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.197 [2024-10-07 13:36:32.889150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.197 [2024-10-07 13:36:32.889257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.197 [2024-10-07 13:36:32.889283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.197 [2024-10-07 13:36:32.889299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.197 [2024-10-07 13:36:32.889509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.197 [2024-10-07 13:36:32.889539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.197 [2024-10-07 13:36:32.889750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.197 [2024-10-07 13:36:32.889775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.197 [2024-10-07 13:36:32.889789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.197 [2024-10-07 13:36:32.889807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.198 [2024-10-07 13:36:32.889827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.198 [2024-10-07 13:36:32.889841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.198 [2024-10-07 13:36:32.889907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.198 [2024-10-07 13:36:32.889943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.198 [2024-10-07 13:36:32.902824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.198 [2024-10-07 13:36:32.902857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.198 [2024-10-07 13:36:32.902967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.198 [2024-10-07 13:36:32.902994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.198 [2024-10-07 13:36:32.903011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.198 [2024-10-07 13:36:32.903120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.198 [2024-10-07 13:36:32.903147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.198 [2024-10-07 13:36:32.903164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.198 [2024-10-07 13:36:32.903189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.198 [2024-10-07 13:36:32.903210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.198 [2024-10-07 13:36:32.903231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.198 [2024-10-07 13:36:32.903245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.198 [2024-10-07 13:36:32.903259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.198 [2024-10-07 13:36:32.903275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.198 [2024-10-07 13:36:32.903290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.198 [2024-10-07 13:36:32.903303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.198 [2024-10-07 13:36:32.903327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.198 [2024-10-07 13:36:32.903344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.198 [2024-10-07 13:36:32.917057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.198 [2024-10-07 13:36:32.917090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.198 [2024-10-07 13:36:32.917257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.198 [2024-10-07 13:36:32.917286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.198 [2024-10-07 13:36:32.917304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.198 [2024-10-07 13:36:32.917386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.198 [2024-10-07 13:36:32.917413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.198 [2024-10-07 13:36:32.917429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.198 [2024-10-07 13:36:32.917455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.198 [2024-10-07 13:36:32.917482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.198 [2024-10-07 13:36:32.917505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.198 [2024-10-07 13:36:32.917520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.198 [2024-10-07 13:36:32.917533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.198 [2024-10-07 13:36:32.917550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.198 [2024-10-07 13:36:32.917564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.198 [2024-10-07 13:36:32.917577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.198 [2024-10-07 13:36:32.917601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.198 [2024-10-07 13:36:32.917617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.198 [2024-10-07 13:36:32.927256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.198 [2024-10-07 13:36:32.927289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.198 [2024-10-07 13:36:32.927426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.198 [2024-10-07 13:36:32.927455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.198 [2024-10-07 13:36:32.927472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.198 [2024-10-07 13:36:32.927583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.198 [2024-10-07 13:36:32.927610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.198 [2024-10-07 13:36:32.927626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.198 [2024-10-07 13:36:32.927651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.198 [2024-10-07 13:36:32.927682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.198 [2024-10-07 13:36:32.927706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.198 [2024-10-07 13:36:32.927721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.198 [2024-10-07 13:36:32.927734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.198 [2024-10-07 13:36:32.927751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.198 [2024-10-07 13:36:32.927766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.199 [2024-10-07 13:36:32.927779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.199 [2024-10-07 13:36:32.928472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.199 [2024-10-07 13:36:32.928496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.199 [2024-10-07 13:36:32.937367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.199 [2024-10-07 13:36:32.937412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.199 [2024-10-07 13:36:32.937606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.199 [2024-10-07 13:36:32.937640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.199 [2024-10-07 13:36:32.937658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.199 [2024-10-07 13:36:32.939109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.199 [2024-10-07 13:36:32.939139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.199 [2024-10-07 13:36:32.939156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.199 [2024-10-07 13:36:32.939175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.199 [2024-10-07 13:36:32.940166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.199 [2024-10-07 13:36:32.940193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.199 [2024-10-07 13:36:32.940206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.199 [2024-10-07 13:36:32.940218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.199 [2024-10-07 13:36:32.941041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.199 [2024-10-07 13:36:32.941066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.199 [2024-10-07 13:36:32.941080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.199 [2024-10-07 13:36:32.941092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.199 [2024-10-07 13:36:32.941180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.199 [2024-10-07 13:36:32.951311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.199 [2024-10-07 13:36:32.951345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.199 [2024-10-07 13:36:32.951885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.199 [2024-10-07 13:36:32.951916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.199 [2024-10-07 13:36:32.951933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.199 [2024-10-07 13:36:32.952037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.199 [2024-10-07 13:36:32.952062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.199 [2024-10-07 13:36:32.952078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.199 [2024-10-07 13:36:32.952141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.199 [2024-10-07 13:36:32.952166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.199 [2024-10-07 13:36:32.952295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.199 [2024-10-07 13:36:32.952318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.199 [2024-10-07 13:36:32.952333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.199 [2024-10-07 13:36:32.952351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.199 [2024-10-07 13:36:32.952366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.199 [2024-10-07 13:36:32.952384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.199 [2024-10-07 13:36:32.952592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.199 [2024-10-07 13:36:32.952618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.199 [2024-10-07 13:36:32.962088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.199 [2024-10-07 13:36:32.962122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.199 [2024-10-07 13:36:32.962437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.199 [2024-10-07 13:36:32.962469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.199 [2024-10-07 13:36:32.962486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.199 [2024-10-07 13:36:32.962562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.199 [2024-10-07 13:36:32.962589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.199 [2024-10-07 13:36:32.962606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.199 [2024-10-07 13:36:32.962723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.199 [2024-10-07 13:36:32.962752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.199 [2024-10-07 13:36:32.964619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.199 [2024-10-07 13:36:32.964645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.199 [2024-10-07 13:36:32.964659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.199 [2024-10-07 13:36:32.964688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.199 [2024-10-07 13:36:32.964704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.199 [2024-10-07 13:36:32.964716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.200 [2024-10-07 13:36:32.965561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.200 [2024-10-07 13:36:32.965586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.200 [2024-10-07 13:36:32.973435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.200 [2024-10-07 13:36:32.973468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.200 [2024-10-07 13:36:32.973634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.200 [2024-10-07 13:36:32.973664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.200 [2024-10-07 13:36:32.973692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.200 [2024-10-07 13:36:32.973774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.200 [2024-10-07 13:36:32.973802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.200 [2024-10-07 13:36:32.973818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.200 [2024-10-07 13:36:32.974516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.200 [2024-10-07 13:36:32.974546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.200 [2024-10-07 13:36:32.974741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.200 [2024-10-07 13:36:32.974766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.200 [2024-10-07 13:36:32.974781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.200 [2024-10-07 13:36:32.974798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.200 [2024-10-07 13:36:32.974813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.200 [2024-10-07 13:36:32.974826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.200 [2024-10-07 13:36:32.974933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.200 [2024-10-07 13:36:32.974955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.200 [2024-10-07 13:36:32.985035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.200 [2024-10-07 13:36:32.985068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.200 [2024-10-07 13:36:32.985272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.200 [2024-10-07 13:36:32.985303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.200 [2024-10-07 13:36:32.985320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.200 [2024-10-07 13:36:32.985430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.200 [2024-10-07 13:36:32.985458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.200 [2024-10-07 13:36:32.985475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.200 [2024-10-07 13:36:32.985582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.200 [2024-10-07 13:36:32.985610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.200 [2024-10-07 13:36:32.985771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.200 [2024-10-07 13:36:32.985795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.200 [2024-10-07 13:36:32.985810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.200 [2024-10-07 13:36:32.985827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.200 [2024-10-07 13:36:32.985842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.200 [2024-10-07 13:36:32.985856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.200 [2024-10-07 13:36:32.986882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.200 [2024-10-07 13:36:32.986908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.200 [2024-10-07 13:36:32.995153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.200 [2024-10-07 13:36:32.995202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.200 [2024-10-07 13:36:32.995384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.200 [2024-10-07 13:36:32.995413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.201 [2024-10-07 13:36:32.995436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.201 [2024-10-07 13:36:32.995529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.201 [2024-10-07 13:36:32.995556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.201 [2024-10-07 13:36:32.995573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.201 [2024-10-07 13:36:32.995592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.201 [2024-10-07 13:36:32.995617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.201 [2024-10-07 13:36:32.995636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.201 [2024-10-07 13:36:32.995664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.201 [2024-10-07 13:36:32.995688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.201 [2024-10-07 13:36:32.995729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.201 [2024-10-07 13:36:32.995747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.201 [2024-10-07 13:36:32.995759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.201 [2024-10-07 13:36:32.995772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.201 [2024-10-07 13:36:32.995810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.201 [2024-10-07 13:36:33.009635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.201 [2024-10-07 13:36:33.009676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.201 [2024-10-07 13:36:33.009816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.201 [2024-10-07 13:36:33.009847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.201 [2024-10-07 13:36:33.009864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.201 [2024-10-07 13:36:33.009941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.201 [2024-10-07 13:36:33.009969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.201 [2024-10-07 13:36:33.009986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.201 [2024-10-07 13:36:33.010011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.201 [2024-10-07 13:36:33.010033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.201 [2024-10-07 13:36:33.010055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.201 [2024-10-07 13:36:33.010070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.201 [2024-10-07 13:36:33.010084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.201 [2024-10-07 13:36:33.010101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.201 [2024-10-07 13:36:33.010116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.201 [2024-10-07 13:36:33.010129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.201 [2024-10-07 13:36:33.010159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.201 [2024-10-07 13:36:33.010196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.201 [2024-10-07 13:36:33.025503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.201 [2024-10-07 13:36:33.025536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.202 [2024-10-07 13:36:33.025646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.202 [2024-10-07 13:36:33.025688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.202 [2024-10-07 13:36:33.025706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.202 [2024-10-07 13:36:33.025797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.202 [2024-10-07 13:36:33.025824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.202 [2024-10-07 13:36:33.025841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.202 [2024-10-07 13:36:33.025866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.202 [2024-10-07 13:36:33.025888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.202 [2024-10-07 13:36:33.025909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.202 [2024-10-07 13:36:33.025924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.202 [2024-10-07 13:36:33.025938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.202 [2024-10-07 13:36:33.025955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.202 [2024-10-07 13:36:33.025970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.202 [2024-10-07 13:36:33.025983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.202 [2024-10-07 13:36:33.026018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.202 [2024-10-07 13:36:33.026049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.202 [2024-10-07 13:36:33.037326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.202 [2024-10-07 13:36:33.037361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.202 [2024-10-07 13:36:33.040411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.202 [2024-10-07 13:36:33.040445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.202 [2024-10-07 13:36:33.040463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.202 [2024-10-07 13:36:33.040549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.202 [2024-10-07 13:36:33.040574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.202 [2024-10-07 13:36:33.040590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.202 [2024-10-07 13:36:33.041606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.202 [2024-10-07 13:36:33.041637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.202 [2024-10-07 13:36:33.042146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.202 [2024-10-07 13:36:33.042171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.202 [2024-10-07 13:36:33.042199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.202 [2024-10-07 13:36:33.042219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.202 [2024-10-07 13:36:33.042233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.202 [2024-10-07 13:36:33.042245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.202 [2024-10-07 13:36:33.042510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.203 [2024-10-07 13:36:33.042535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.203 [2024-10-07 13:36:33.047471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.203 [2024-10-07 13:36:33.047501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.203 [2024-10-07 13:36:33.047687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.203 [2024-10-07 13:36:33.047718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.203 [2024-10-07 13:36:33.047735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.203 [2024-10-07 13:36:33.047856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.203 [2024-10-07 13:36:33.047883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.203 [2024-10-07 13:36:33.047899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.203 [2024-10-07 13:36:33.047925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.203 [2024-10-07 13:36:33.047946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.203 [2024-10-07 13:36:33.047967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.203 [2024-10-07 13:36:33.047981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.203 [2024-10-07 13:36:33.047994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.203 [2024-10-07 13:36:33.048011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.203 [2024-10-07 13:36:33.048026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.203 [2024-10-07 13:36:33.048041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.203 [2024-10-07 13:36:33.048066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.203 [2024-10-07 13:36:33.048082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.203 [2024-10-07 13:36:33.057957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.203 [2024-10-07 13:36:33.057999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.203 [2024-10-07 13:36:33.058104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.203 [2024-10-07 13:36:33.058134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.203 [2024-10-07 13:36:33.058152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.203 [2024-10-07 13:36:33.058313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.203 [2024-10-07 13:36:33.058340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.203 [2024-10-07 13:36:33.058357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.203 [2024-10-07 13:36:33.058559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.203 [2024-10-07 13:36:33.058589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.203 [2024-10-07 13:36:33.058721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.203 [2024-10-07 13:36:33.058745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.203 [2024-10-07 13:36:33.058760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.203 [2024-10-07 13:36:33.058778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.203 [2024-10-07 13:36:33.058793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.204 [2024-10-07 13:36:33.058806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.204 [2024-10-07 13:36:33.058967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.204 [2024-10-07 13:36:33.059006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.204 [2024-10-07 13:36:33.072131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.204 [2024-10-07 13:36:33.072164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.204 [2024-10-07 13:36:33.072278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.204 [2024-10-07 13:36:33.072308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.204 [2024-10-07 13:36:33.072325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.204 [2024-10-07 13:36:33.072459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.204 [2024-10-07 13:36:33.072485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.204 [2024-10-07 13:36:33.072501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.204 [2024-10-07 13:36:33.072544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.204 [2024-10-07 13:36:33.072576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.204 [2024-10-07 13:36:33.072598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.204 [2024-10-07 13:36:33.072623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.204 [2024-10-07 13:36:33.072636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.204 [2024-10-07 13:36:33.072653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.204 [2024-10-07 13:36:33.072687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.204 [2024-10-07 13:36:33.072702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.204 [2024-10-07 13:36:33.072744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.204 [2024-10-07 13:36:33.072765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.204 [2024-10-07 13:36:33.082448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.205 [2024-10-07 13:36:33.082480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.205 [2024-10-07 13:36:33.082735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.205 [2024-10-07 13:36:33.082766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.205 [2024-10-07 13:36:33.082784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.205 [2024-10-07 13:36:33.082867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.205 [2024-10-07 13:36:33.082895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.205 [2024-10-07 13:36:33.082911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.205 [2024-10-07 13:36:33.086206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.205 [2024-10-07 13:36:33.086239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.205 [2024-10-07 13:36:33.087144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.205 [2024-10-07 13:36:33.087169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.205 [2024-10-07 13:36:33.087190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.205 [2024-10-07 13:36:33.087208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.205 [2024-10-07 13:36:33.087222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.205 [2024-10-07 13:36:33.087235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.205 [2024-10-07 13:36:33.087361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.205 [2024-10-07 13:36:33.087382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.205 [2024-10-07 13:36:33.092556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.205 [2024-10-07 13:36:33.092601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.205 [2024-10-07 13:36:33.092816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.205 [2024-10-07 13:36:33.092846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.205 [2024-10-07 13:36:33.092863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.205 [2024-10-07 13:36:33.093016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.205 [2024-10-07 13:36:33.093044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.205 [2024-10-07 13:36:33.093067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.205 [2024-10-07 13:36:33.093086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.205 [2024-10-07 13:36:33.093112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.205 [2024-10-07 13:36:33.093131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.205 [2024-10-07 13:36:33.093150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.205 [2024-10-07 13:36:33.093164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.205 [2024-10-07 13:36:33.093190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.205 [2024-10-07 13:36:33.093207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.206 [2024-10-07 13:36:33.093219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.206 [2024-10-07 13:36:33.093248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.206 [2024-10-07 13:36:33.093272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.206 [2024-10-07 13:36:33.102640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.206 [2024-10-07 13:36:33.102838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.206 [2024-10-07 13:36:33.102869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.206 [2024-10-07 13:36:33.102887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.206 [2024-10-07 13:36:33.102924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.206 [2024-10-07 13:36:33.102956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.206 [2024-10-07 13:36:33.102989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.206 [2024-10-07 13:36:33.103006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.206 [2024-10-07 13:36:33.103020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.206 [2024-10-07 13:36:33.103044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.206 [2024-10-07 13:36:33.103193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.206 [2024-10-07 13:36:33.103221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.206 [2024-10-07 13:36:33.103238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.206 [2024-10-07 13:36:33.103263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.206 [2024-10-07 13:36:33.103287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.206 [2024-10-07 13:36:33.103303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.206 [2024-10-07 13:36:33.103316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.206 [2024-10-07 13:36:33.103341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.206 [2024-10-07 13:36:33.117864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.206 [2024-10-07 13:36:33.117914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.206 [2024-10-07 13:36:33.118050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.206 [2024-10-07 13:36:33.118080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.206 [2024-10-07 13:36:33.118097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.206 [2024-10-07 13:36:33.118187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.206 [2024-10-07 13:36:33.118219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.206 [2024-10-07 13:36:33.118237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.206 [2024-10-07 13:36:33.118255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.206 [2024-10-07 13:36:33.118282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.206 [2024-10-07 13:36:33.118300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.207 [2024-10-07 13:36:33.118313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.207 [2024-10-07 13:36:33.118325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.207 [2024-10-07 13:36:33.118350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.207 [2024-10-07 13:36:33.118367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.207 [2024-10-07 13:36:33.118380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.207 [2024-10-07 13:36:33.118393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.207 [2024-10-07 13:36:33.118416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.207 [2024-10-07 13:36:33.127950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.207 [2024-10-07 13:36:33.128131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.207 [2024-10-07 13:36:33.128160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.207 [2024-10-07 13:36:33.128178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.207 [2024-10-07 13:36:33.130552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.207 [2024-10-07 13:36:33.134757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.207 [2024-10-07 13:36:33.134799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.207 [2024-10-07 13:36:33.134820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.207 [2024-10-07 13:36:33.134833] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.207 [2024-10-07 13:36:33.135355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.207 [2024-10-07 13:36:33.135496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.207 [2024-10-07 13:36:33.135525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.207 [2024-10-07 13:36:33.135542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.207 [2024-10-07 13:36:33.136045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.207 [2024-10-07 13:36:33.136291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.207 [2024-10-07 13:36:33.136315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.207 [2024-10-07 13:36:33.136329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.207 [2024-10-07 13:36:33.136382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.207 [2024-10-07 13:36:33.138147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.207 [2024-10-07 13:36:33.138295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.207 [2024-10-07 13:36:33.138325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.207 [2024-10-07 13:36:33.138342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.207 [2024-10-07 13:36:33.138367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.207 [2024-10-07 13:36:33.138391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.207 [2024-10-07 13:36:33.138406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.207 [2024-10-07 13:36:33.138420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.207 [2024-10-07 13:36:33.138444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.207 [2024-10-07 13:36:33.146203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.207 [2024-10-07 13:36:33.146464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.207 [2024-10-07 13:36:33.146495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.207 [2024-10-07 13:36:33.146513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.207 [2024-10-07 13:36:33.146621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.207 [2024-10-07 13:36:33.146751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.207 [2024-10-07 13:36:33.146788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.207 [2024-10-07 13:36:33.146803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.207 [2024-10-07 13:36:33.146923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.207 [2024-10-07 13:36:33.151434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.207 [2024-10-07 13:36:33.151818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.207 [2024-10-07 13:36:33.151850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.207 [2024-10-07 13:36:33.151871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.207 [2024-10-07 13:36:33.151925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.207 [2024-10-07 13:36:33.152111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.207 [2024-10-07 13:36:33.152135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.207 [2024-10-07 13:36:33.152150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.208 [2024-10-07 13:36:33.152201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.208 [2024-10-07 13:36:33.156647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.208 [2024-10-07 13:36:33.156797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.208 [2024-10-07 13:36:33.156827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.208 [2024-10-07 13:36:33.156850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.208 [2024-10-07 13:36:33.156877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.208 [2024-10-07 13:36:33.156902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.208 [2024-10-07 13:36:33.156917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.208 [2024-10-07 13:36:33.156930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.208 [2024-10-07 13:36:33.156955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.208 [2024-10-07 13:36:33.161717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.208 [2024-10-07 13:36:33.161957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.208 [2024-10-07 13:36:33.161987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.208 [2024-10-07 13:36:33.162005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.208 [2024-10-07 13:36:33.162112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.208 [2024-10-07 13:36:33.162224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.208 [2024-10-07 13:36:33.162260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.208 [2024-10-07 13:36:33.162274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.208 [2024-10-07 13:36:33.165535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.208 [2024-10-07 13:36:33.167288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.208 [2024-10-07 13:36:33.167430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.208 [2024-10-07 13:36:33.167460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.208 [2024-10-07 13:36:33.167477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.208 [2024-10-07 13:36:33.167503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.208 [2024-10-07 13:36:33.167527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.208 [2024-10-07 13:36:33.167542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.208 [2024-10-07 13:36:33.167556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.208 [2024-10-07 13:36:33.167580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.208 [2024-10-07 13:36:33.172307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.208 [2024-10-07 13:36:33.172466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.208 [2024-10-07 13:36:33.172495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.208 [2024-10-07 13:36:33.172513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.208 [2024-10-07 13:36:33.172538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.208 [2024-10-07 13:36:33.172563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.208 [2024-10-07 13:36:33.172584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.208 [2024-10-07 13:36:33.172598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.208 [2024-10-07 13:36:33.172623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.208 [2024-10-07 13:36:33.181619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.208 [2024-10-07 13:36:33.182200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.208 [2024-10-07 13:36:33.182247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.208 [2024-10-07 13:36:33.182272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.208 [2024-10-07 13:36:33.182508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.208 [2024-10-07 13:36:33.182743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.208 [2024-10-07 13:36:33.182768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.208 [2024-10-07 13:36:33.182783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.208 [2024-10-07 13:36:33.182837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.208 [2024-10-07 13:36:33.182863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.208 [2024-10-07 13:36:33.182999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.208 [2024-10-07 13:36:33.183027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.208 [2024-10-07 13:36:33.183044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.208 [2024-10-07 13:36:33.183238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.208 [2024-10-07 13:36:33.183310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.208 [2024-10-07 13:36:33.183331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.208 [2024-10-07 13:36:33.183344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.208 [2024-10-07 13:36:33.183384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.208 [2024-10-07 13:36:33.192810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.208 [2024-10-07 13:36:33.193065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.208 [2024-10-07 13:36:33.193096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.208 [2024-10-07 13:36:33.193114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.208 [2024-10-07 13:36:33.195405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.209 [2024-10-07 13:36:33.195821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.209 [2024-10-07 13:36:33.195874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.209 [2024-10-07 13:36:33.195906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.209 [2024-10-07 13:36:33.195921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.209 [2024-10-07 13:36:33.196920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.209 [2024-10-07 13:36:33.197046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.209 [2024-10-07 13:36:33.197075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.209 [2024-10-07 13:36:33.197092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.209 [2024-10-07 13:36:33.197527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.209 [2024-10-07 13:36:33.197775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.209 [2024-10-07 13:36:33.197799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.209 [2024-10-07 13:36:33.197814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.209 [2024-10-07 13:36:33.197865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.209 [2024-10-07 13:36:33.202907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.209 [2024-10-07 13:36:33.203101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.209 [2024-10-07 13:36:33.203129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.209 [2024-10-07 13:36:33.203146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.209 [2024-10-07 13:36:33.207386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.209 [2024-10-07 13:36:33.207603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.209 [2024-10-07 13:36:33.207631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.209 [2024-10-07 13:36:33.207645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.209 [2024-10-07 13:36:33.207768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.209 [2024-10-07 13:36:33.207794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.209 [2024-10-07 13:36:33.208337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.209 [2024-10-07 13:36:33.208368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.209 [2024-10-07 13:36:33.208385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.209 [2024-10-07 13:36:33.210381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.209 [2024-10-07 13:36:33.210701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.209 [2024-10-07 13:36:33.210730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.209 [2024-10-07 13:36:33.210745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.209 [2024-10-07 13:36:33.211560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.209 [2024-10-07 13:36:33.213000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.209 [2024-10-07 13:36:33.213236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.209 [2024-10-07 13:36:33.213267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.209 [2024-10-07 13:36:33.213284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.209 [2024-10-07 13:36:33.213315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.209 [2024-10-07 13:36:33.213340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.209 [2024-10-07 13:36:33.213354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.209 [2024-10-07 13:36:33.213368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.209 [2024-10-07 13:36:33.213394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.209 [2024-10-07 13:36:33.217861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.209 [2024-10-07 13:36:33.218010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.209 [2024-10-07 13:36:33.218037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.209 [2024-10-07 13:36:33.218055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.209 [2024-10-07 13:36:33.218760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.209 [2024-10-07 13:36:33.218927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.209 [2024-10-07 13:36:33.218949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.209 [2024-10-07 13:36:33.218962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.209 [2024-10-07 13:36:33.219068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.209 [2024-10-07 13:36:33.225290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.209 [2024-10-07 13:36:33.225610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.209 [2024-10-07 13:36:33.225642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.209 [2024-10-07 13:36:33.225660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.209 [2024-10-07 13:36:33.225722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.209 [2024-10-07 13:36:33.225751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.209 [2024-10-07 13:36:33.225767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.209 [2024-10-07 13:36:33.225780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.209 [2024-10-07 13:36:33.225963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.209 [2024-10-07 13:36:33.227944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.209 [2024-10-07 13:36:33.228091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.209 [2024-10-07 13:36:33.228118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.209 [2024-10-07 13:36:33.228134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.209 [2024-10-07 13:36:33.228160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.209 [2024-10-07 13:36:33.228184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.209 [2024-10-07 13:36:33.228200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.209 [2024-10-07 13:36:33.228219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.209 [2024-10-07 13:36:33.228244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.209 [2024-10-07 13:36:33.237254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.209 [2024-10-07 13:36:33.237521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.209 [2024-10-07 13:36:33.237552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.209 [2024-10-07 13:36:33.237571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.210 [2024-10-07 13:36:33.237692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.210 [2024-10-07 13:36:33.239974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.210 [2024-10-07 13:36:33.240001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.210 [2024-10-07 13:36:33.240016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.210 [2024-10-07 13:36:33.241075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.210 [2024-10-07 13:36:33.241429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.210 [2024-10-07 13:36:33.241790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.210 [2024-10-07 13:36:33.241821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.210 [2024-10-07 13:36:33.241838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.210 [2024-10-07 13:36:33.241890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.210 [2024-10-07 13:36:33.242351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.210 [2024-10-07 13:36:33.242390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.210 [2024-10-07 13:36:33.242404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.210 [2024-10-07 13:36:33.242637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.210 [2024-10-07 13:36:33.247602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.210 [2024-10-07 13:36:33.247740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.210 [2024-10-07 13:36:33.247769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.210 [2024-10-07 13:36:33.247786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.210 [2024-10-07 13:36:33.247812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.210 [2024-10-07 13:36:33.247853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.210 [2024-10-07 13:36:33.247874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.210 [2024-10-07 13:36:33.247888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.210 [2024-10-07 13:36:33.247912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.210 [2024-10-07 13:36:33.254794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.210 [2024-10-07 13:36:33.254933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.210 [2024-10-07 13:36:33.254967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.210 [2024-10-07 13:36:33.254985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.210 [2024-10-07 13:36:33.255010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.210 [2024-10-07 13:36:33.255035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.210 [2024-10-07 13:36:33.255050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.210 [2024-10-07 13:36:33.255064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.210 [2024-10-07 13:36:33.255088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.210 [2024-10-07 13:36:33.258870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.210 [2024-10-07 13:36:33.259062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.210 [2024-10-07 13:36:33.259092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.210 [2024-10-07 13:36:33.259109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.210 [2024-10-07 13:36:33.259135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.210 [2024-10-07 13:36:33.259159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.210 [2024-10-07 13:36:33.259174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.210 [2024-10-07 13:36:33.259187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.210 [2024-10-07 13:36:33.259213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.210 [2024-10-07 13:36:33.270356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.210 [2024-10-07 13:36:33.270408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.210 [2024-10-07 13:36:33.270536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.210 [2024-10-07 13:36:33.270564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.210 [2024-10-07 13:36:33.270580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.210 [2024-10-07 13:36:33.270698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.210 [2024-10-07 13:36:33.270725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.210 [2024-10-07 13:36:33.270742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.210 [2024-10-07 13:36:33.270761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.210 [2024-10-07 13:36:33.270787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.210 [2024-10-07 13:36:33.270806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.210 [2024-10-07 13:36:33.270819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.210 [2024-10-07 13:36:33.270832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.210 [2024-10-07 13:36:33.270857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.210 [2024-10-07 13:36:33.270883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.210 [2024-10-07 13:36:33.270897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.210 [2024-10-07 13:36:33.270911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.210 [2024-10-07 13:36:33.270935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.210 [2024-10-07 13:36:33.282898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.210 [2024-10-07 13:36:33.282932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.210 [2024-10-07 13:36:33.283151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.210 [2024-10-07 13:36:33.283181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.210 [2024-10-07 13:36:33.283198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.210 [2024-10-07 13:36:33.283334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.210 [2024-10-07 13:36:33.283360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.210 [2024-10-07 13:36:33.283376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.210 [2024-10-07 13:36:33.283483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.210 [2024-10-07 13:36:33.283509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.210 [2024-10-07 13:36:33.283627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.210 [2024-10-07 13:36:33.283648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.210 [2024-10-07 13:36:33.283663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.210 [2024-10-07 13:36:33.283690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.210 [2024-10-07 13:36:33.283705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.210 [2024-10-07 13:36:33.283717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.210 [2024-10-07 13:36:33.283931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.210 [2024-10-07 13:36:33.283953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.210 [2024-10-07 13:36:33.293010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.210 [2024-10-07 13:36:33.293057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.210 [2024-10-07 13:36:33.293191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.210 [2024-10-07 13:36:33.293218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.210 [2024-10-07 13:36:33.293234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.211 [2024-10-07 13:36:33.293352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.211 [2024-10-07 13:36:33.293378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.211 [2024-10-07 13:36:33.293394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.211 [2024-10-07 13:36:33.293419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.211 [2024-10-07 13:36:33.293446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.211 [2024-10-07 13:36:33.293464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.211 [2024-10-07 13:36:33.293478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.211 [2024-10-07 13:36:33.293491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.211 [2024-10-07 13:36:33.293516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.211 [2024-10-07 13:36:33.293534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.211 [2024-10-07 13:36:33.293546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.211 [2024-10-07 13:36:33.293559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.211 [2024-10-07 13:36:33.293597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.211 [2024-10-07 13:36:33.303696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.211 [2024-10-07 13:36:33.303729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.211 [2024-10-07 13:36:33.303873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.211 [2024-10-07 13:36:33.303902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.211 [2024-10-07 13:36:33.303920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.211 [2024-10-07 13:36:33.304002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.211 [2024-10-07 13:36:33.304027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.211 [2024-10-07 13:36:33.304043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.211 [2024-10-07 13:36:33.304229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.211 [2024-10-07 13:36:33.304258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.211 [2024-10-07 13:36:33.304487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.211 [2024-10-07 13:36:33.304511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.211 [2024-10-07 13:36:33.304526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.211 [2024-10-07 13:36:33.304544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.211 [2024-10-07 13:36:33.304559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.211 [2024-10-07 13:36:33.304572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.211 [2024-10-07 13:36:33.304622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.211 [2024-10-07 13:36:33.304643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.211 [2024-10-07 13:36:33.319384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.211 [2024-10-07 13:36:33.319417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.211 [2024-10-07 13:36:33.319551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.211 [2024-10-07 13:36:33.319586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.211 [2024-10-07 13:36:33.319604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.211 [2024-10-07 13:36:33.319725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.211 [2024-10-07 13:36:33.319752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.211 [2024-10-07 13:36:33.319769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.211 [2024-10-07 13:36:33.319795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.211 [2024-10-07 13:36:33.319817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.211 [2024-10-07 13:36:33.319838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.211 [2024-10-07 13:36:33.319853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.211 [2024-10-07 13:36:33.319867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.211 [2024-10-07 13:36:33.319884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.211 [2024-10-07 13:36:33.319900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.211 [2024-10-07 13:36:33.319913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.211 [2024-10-07 13:36:33.319938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.211 [2024-10-07 13:36:33.319953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.211 [2024-10-07 13:36:33.332587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.211 [2024-10-07 13:36:33.332621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.211 [2024-10-07 13:36:33.332914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.211 [2024-10-07 13:36:33.332944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.211 [2024-10-07 13:36:33.332962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.211 [2024-10-07 13:36:33.333049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.211 [2024-10-07 13:36:33.333076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.211 [2024-10-07 13:36:33.333092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.211 [2024-10-07 13:36:33.333119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.211 [2024-10-07 13:36:33.333140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.211 [2024-10-07 13:36:33.333161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.211 [2024-10-07 13:36:33.333177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.211 [2024-10-07 13:36:33.333190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.211 [2024-10-07 13:36:33.333208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.211 [2024-10-07 13:36:33.333222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.211 [2024-10-07 13:36:33.333241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.212 [2024-10-07 13:36:33.333266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.212 [2024-10-07 13:36:33.333283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.212 [2024-10-07 13:36:33.346182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.212 [2024-10-07 13:36:33.346216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.212 [2024-10-07 13:36:33.346462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.212 [2024-10-07 13:36:33.346491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.212 [2024-10-07 13:36:33.346508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.212 [2024-10-07 13:36:33.346593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.212 [2024-10-07 13:36:33.346619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.212 [2024-10-07 13:36:33.346635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.212 [2024-10-07 13:36:33.346782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.212 [2024-10-07 13:36:33.346811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.212 [2024-10-07 13:36:33.347048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.212 [2024-10-07 13:36:33.347071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.212 [2024-10-07 13:36:33.347086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.212 [2024-10-07 13:36:33.347103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.212 [2024-10-07 13:36:33.347119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.212 [2024-10-07 13:36:33.347132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.212 [2024-10-07 13:36:33.347196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.212 [2024-10-07 13:36:33.347232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.212 [2024-10-07 13:36:33.359738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.212 [2024-10-07 13:36:33.359773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.212 [2024-10-07 13:36:33.360064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.212 [2024-10-07 13:36:33.360097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.212 [2024-10-07 13:36:33.360115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.212 [2024-10-07 13:36:33.360219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.212 [2024-10-07 13:36:33.360245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.212 [2024-10-07 13:36:33.360260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.212 [2024-10-07 13:36:33.360464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.212 [2024-10-07 13:36:33.360499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.212 [2024-10-07 13:36:33.360549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.212 [2024-10-07 13:36:33.360570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.212 [2024-10-07 13:36:33.360585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.212 [2024-10-07 13:36:33.360602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.212 [2024-10-07 13:36:33.360616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.212 [2024-10-07 13:36:33.360629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.212 [2024-10-07 13:36:33.360654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.212 [2024-10-07 13:36:33.360681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.212 [2024-10-07 13:36:33.375177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.212 [2024-10-07 13:36:33.375212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.212 [2024-10-07 13:36:33.376054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.212 [2024-10-07 13:36:33.376086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.212 [2024-10-07 13:36:33.376103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.212 [2024-10-07 13:36:33.376217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.212 [2024-10-07 13:36:33.376242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.212 [2024-10-07 13:36:33.376258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.212 [2024-10-07 13:36:33.376733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.212 [2024-10-07 13:36:33.376766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.212 [2024-10-07 13:36:33.377043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.212 [2024-10-07 13:36:33.377068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.212 [2024-10-07 13:36:33.377083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.212 [2024-10-07 13:36:33.377116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.212 [2024-10-07 13:36:33.377132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.212 [2024-10-07 13:36:33.377145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.212 [2024-10-07 13:36:33.377380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.212 [2024-10-07 13:36:33.377404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.212 [2024-10-07 13:36:33.390817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.212 [2024-10-07 13:36:33.390852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.212 [2024-10-07 13:36:33.391352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.212 [2024-10-07 13:36:33.391383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.212 [2024-10-07 13:36:33.391406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.212 [2024-10-07 13:36:33.391547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.212 [2024-10-07 13:36:33.391572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.212 [2024-10-07 13:36:33.391589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.212 [2024-10-07 13:36:33.391815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.212 [2024-10-07 13:36:33.391845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.212 [2024-10-07 13:36:33.392046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.212 [2024-10-07 13:36:33.392070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.212 [2024-10-07 13:36:33.392085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.212 [2024-10-07 13:36:33.392102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.212 [2024-10-07 13:36:33.392117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.212 [2024-10-07 13:36:33.392130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.212 [2024-10-07 13:36:33.392343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.212 [2024-10-07 13:36:33.392368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.212 [2024-10-07 13:36:33.406550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.212 [2024-10-07 13:36:33.406584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.212 [2024-10-07 13:36:33.406750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.212 [2024-10-07 13:36:33.406780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.212 [2024-10-07 13:36:33.406797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.212 [2024-10-07 13:36:33.406881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.212 [2024-10-07 13:36:33.406907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.212 [2024-10-07 13:36:33.406923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.212 [2024-10-07 13:36:33.406948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.212 [2024-10-07 13:36:33.406970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.212 [2024-10-07 13:36:33.406991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.212 [2024-10-07 13:36:33.407007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.212 [2024-10-07 13:36:33.407020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.212 [2024-10-07 13:36:33.407037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.212 [2024-10-07 13:36:33.407051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.212 [2024-10-07 13:36:33.407070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.212 [2024-10-07 13:36:33.407096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.212 [2024-10-07 13:36:33.407113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.212 [2024-10-07 13:36:33.421840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.212 [2024-10-07 13:36:33.421875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.212 [2024-10-07 13:36:33.422022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.212 [2024-10-07 13:36:33.422051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.212 [2024-10-07 13:36:33.422068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.212 [2024-10-07 13:36:33.422178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.212 [2024-10-07 13:36:33.422204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.213 [2024-10-07 13:36:33.422219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.213 [2024-10-07 13:36:33.422245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.213 [2024-10-07 13:36:33.422267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.213 [2024-10-07 13:36:33.422288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.213 [2024-10-07 13:36:33.422303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.213 [2024-10-07 13:36:33.422318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.213 [2024-10-07 13:36:33.422336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.213 [2024-10-07 13:36:33.422350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.213 [2024-10-07 13:36:33.422363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.213 [2024-10-07 13:36:33.422388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.213 [2024-10-07 13:36:33.422405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.213 [2024-10-07 13:36:33.437642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.213 [2024-10-07 13:36:33.437700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.213 [2024-10-07 13:36:33.438064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.213 [2024-10-07 13:36:33.438095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.213 [2024-10-07 13:36:33.438113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.213 [2024-10-07 13:36:33.438195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.213 [2024-10-07 13:36:33.438220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.213 [2024-10-07 13:36:33.438237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.213 [2024-10-07 13:36:33.438440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.213 [2024-10-07 13:36:33.438469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.213 [2024-10-07 13:36:33.438686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.213 [2024-10-07 13:36:33.438710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.213 [2024-10-07 13:36:33.438724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.213 [2024-10-07 13:36:33.438741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.213 [2024-10-07 13:36:33.438756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.213 [2024-10-07 13:36:33.438770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.213 [2024-10-07 13:36:33.438836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.213 [2024-10-07 13:36:33.438871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.213 [2024-10-07 13:36:33.453230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.213 [2024-10-07 13:36:33.453263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.213 [2024-10-07 13:36:33.453613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.213 [2024-10-07 13:36:33.453645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.213 [2024-10-07 13:36:33.453663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.213 [2024-10-07 13:36:33.453786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.213 [2024-10-07 13:36:33.453812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.213 [2024-10-07 13:36:33.453830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.213 [2024-10-07 13:36:33.454184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.213 [2024-10-07 13:36:33.454229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.213 [2024-10-07 13:36:33.454301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.213 [2024-10-07 13:36:33.454321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.213 [2024-10-07 13:36:33.454351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.213 [2024-10-07 13:36:33.454369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.213 [2024-10-07 13:36:33.454384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.213 [2024-10-07 13:36:33.454397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.213 [2024-10-07 13:36:33.454579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.213 [2024-10-07 13:36:33.454602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.213 [2024-10-07 13:36:33.468787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.213 [2024-10-07 13:36:33.468820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.213 [2024-10-07 13:36:33.469179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.213 [2024-10-07 13:36:33.469211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.213 [2024-10-07 13:36:33.469234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.213 [2024-10-07 13:36:33.469322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.213 [2024-10-07 13:36:33.469348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.213 [2024-10-07 13:36:33.469364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.213 [2024-10-07 13:36:33.469583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.213 [2024-10-07 13:36:33.469612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.213 [2024-10-07 13:36:33.469822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.213 [2024-10-07 13:36:33.469849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.213 [2024-10-07 13:36:33.469864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.213 [2024-10-07 13:36:33.469882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.213 [2024-10-07 13:36:33.469897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.213 [2024-10-07 13:36:33.469910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.213 [2024-10-07 13:36:33.469960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.213 [2024-10-07 13:36:33.469981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.213 [2024-10-07 13:36:33.484456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.213 [2024-10-07 13:36:33.484488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.213 [2024-10-07 13:36:33.484837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.213 [2024-10-07 13:36:33.484869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.213 [2024-10-07 13:36:33.484886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.213 [2024-10-07 13:36:33.484969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.213 [2024-10-07 13:36:33.484994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.213 [2024-10-07 13:36:33.485011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.213 [2024-10-07 13:36:33.485216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.213 [2024-10-07 13:36:33.485244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.213 [2024-10-07 13:36:33.485462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.213 [2024-10-07 13:36:33.485487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.213 [2024-10-07 13:36:33.485502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.213 [2024-10-07 13:36:33.485520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.213 [2024-10-07 13:36:33.485535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.213 [2024-10-07 13:36:33.485548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.213 [2024-10-07 13:36:33.485798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.213 [2024-10-07 13:36:33.485822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.213 [2024-10-07 13:36:33.499812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.213 [2024-10-07 13:36:33.499845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.213 [2024-10-07 13:36:33.499993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.213 [2024-10-07 13:36:33.500022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.213 [2024-10-07 13:36:33.500039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.213 [2024-10-07 13:36:33.500137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.213 [2024-10-07 13:36:33.500162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.213 [2024-10-07 13:36:33.500178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.213 [2024-10-07 13:36:33.500203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.213 [2024-10-07 13:36:33.500226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.213 [2024-10-07 13:36:33.500248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.213 [2024-10-07 13:36:33.500263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.213 [2024-10-07 13:36:33.500276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.213 [2024-10-07 13:36:33.500293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.213 [2024-10-07 13:36:33.500307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.213 [2024-10-07 13:36:33.500321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.213 [2024-10-07 13:36:33.500345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.213 [2024-10-07 13:36:33.500361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.214 [2024-10-07 13:36:33.512773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.214 [2024-10-07 13:36:33.512807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.214 [2024-10-07 13:36:33.513079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.214 [2024-10-07 13:36:33.513110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.214 [2024-10-07 13:36:33.513128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.214 [2024-10-07 13:36:33.513240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.214 [2024-10-07 13:36:33.513266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.214 [2024-10-07 13:36:33.513283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.214 [2024-10-07 13:36:33.513391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.214 [2024-10-07 13:36:33.513418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.214 [2024-10-07 13:36:33.513535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.214 [2024-10-07 13:36:33.513561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.214 [2024-10-07 13:36:33.513576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.214 [2024-10-07 13:36:33.513593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.214 [2024-10-07 13:36:33.513607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.214 [2024-10-07 13:36:33.513619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.214 [2024-10-07 13:36:33.516732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.214 [2024-10-07 13:36:33.516760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.214 [2024-10-07 13:36:33.522888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.214 [2024-10-07 13:36:33.522935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.214 [2024-10-07 13:36:33.523112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.214 [2024-10-07 13:36:33.523140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.214 [2024-10-07 13:36:33.523158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.214 [2024-10-07 13:36:33.523274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.214 [2024-10-07 13:36:33.523300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.214 [2024-10-07 13:36:33.523317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.214 [2024-10-07 13:36:33.523335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.214 [2024-10-07 13:36:33.523361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.214 [2024-10-07 13:36:33.523380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.214 [2024-10-07 13:36:33.523394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.214 [2024-10-07 13:36:33.523407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.214 [2024-10-07 13:36:33.523432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.214 [2024-10-07 13:36:33.523450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.214 [2024-10-07 13:36:33.523462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.214 [2024-10-07 13:36:33.523476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.214 [2024-10-07 13:36:33.523499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.214 [2024-10-07 13:36:33.532972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.214 [2024-10-07 13:36:33.533184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.214 [2024-10-07 13:36:33.533213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.214 [2024-10-07 13:36:33.533231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.214 [2024-10-07 13:36:33.533270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.214 [2024-10-07 13:36:33.533308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.214 [2024-10-07 13:36:33.533339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.214 [2024-10-07 13:36:33.533356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.214 [2024-10-07 13:36:33.533384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.214 [2024-10-07 13:36:33.533409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.214 [2024-10-07 13:36:33.533579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.214 [2024-10-07 13:36:33.533605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.214 [2024-10-07 13:36:33.533621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.214 [2024-10-07 13:36:33.533660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.214 [2024-10-07 13:36:33.533694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.214 [2024-10-07 13:36:33.533710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.214 [2024-10-07 13:36:33.533724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.214 [2024-10-07 13:36:33.533748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.214 [2024-10-07 13:36:33.547212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.214 [2024-10-07 13:36:33.547246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.214 [2024-10-07 13:36:33.547780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.214 [2024-10-07 13:36:33.547811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.214 [2024-10-07 13:36:33.547828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.214 [2024-10-07 13:36:33.547913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.214 [2024-10-07 13:36:33.547939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.214 [2024-10-07 13:36:33.547955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.214 [2024-10-07 13:36:33.548325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.214 [2024-10-07 13:36:33.548368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.214 [2024-10-07 13:36:33.548441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.214 [2024-10-07 13:36:33.548462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.214 [2024-10-07 13:36:33.548476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.214 [2024-10-07 13:36:33.548493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.214 [2024-10-07 13:36:33.548508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.214 [2024-10-07 13:36:33.548521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.214 [2024-10-07 13:36:33.548717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.214 [2024-10-07 13:36:33.548761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.214 [2024-10-07 13:36:33.563114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.214 [2024-10-07 13:36:33.563147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.214 [2024-10-07 13:36:33.563718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.214 [2024-10-07 13:36:33.563750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.214 [2024-10-07 13:36:33.563768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.214 [2024-10-07 13:36:33.563854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.214 [2024-10-07 13:36:33.563880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.214 [2024-10-07 13:36:33.563896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.214 [2024-10-07 13:36:33.564113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.214 [2024-10-07 13:36:33.564142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.214 [2024-10-07 13:36:33.564359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.214 [2024-10-07 13:36:33.564383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.214 [2024-10-07 13:36:33.564397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.214 [2024-10-07 13:36:33.564415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.214 [2024-10-07 13:36:33.564430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.214 [2024-10-07 13:36:33.564444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.214 [2024-10-07 13:36:33.564510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.214 [2024-10-07 13:36:33.564546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.214 [2024-10-07 13:36:33.577794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.214 [2024-10-07 13:36:33.577827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.214 [2024-10-07 13:36:33.578045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.214 [2024-10-07 13:36:33.578073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.214 [2024-10-07 13:36:33.578090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.214 [2024-10-07 13:36:33.578169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.214 [2024-10-07 13:36:33.578195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.214 [2024-10-07 13:36:33.578212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.214 [2024-10-07 13:36:33.578238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.214 [2024-10-07 13:36:33.578260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.215 [2024-10-07 13:36:33.578281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.215 [2024-10-07 13:36:33.578296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.215 [2024-10-07 13:36:33.578315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.215 [2024-10-07 13:36:33.578333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.215 [2024-10-07 13:36:33.578347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.215 [2024-10-07 13:36:33.578360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.215 [2024-10-07 13:36:33.578385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.215 [2024-10-07 13:36:33.578416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.215 [2024-10-07 13:36:33.593888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.215 [2024-10-07 13:36:33.593923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.215 [2024-10-07 13:36:33.594722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.215 [2024-10-07 13:36:33.594755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.215 [2024-10-07 13:36:33.594773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.215 [2024-10-07 13:36:33.594890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.215 [2024-10-07 13:36:33.594917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.215 [2024-10-07 13:36:33.594933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.215 [2024-10-07 13:36:33.595541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.215 [2024-10-07 13:36:33.595571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.215 [2024-10-07 13:36:33.595816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.215 [2024-10-07 13:36:33.595842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.215 [2024-10-07 13:36:33.595857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.215 [2024-10-07 13:36:33.595875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.215 [2024-10-07 13:36:33.595891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.215 [2024-10-07 13:36:33.595904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.215 [2024-10-07 13:36:33.595956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.215 [2024-10-07 13:36:33.595978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.215 [2024-10-07 13:36:33.605622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.215 [2024-10-07 13:36:33.605656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.215 [2024-10-07 13:36:33.605869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.215 [2024-10-07 13:36:33.605898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.215 [2024-10-07 13:36:33.605916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.215 [2024-10-07 13:36:33.606022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.215 [2024-10-07 13:36:33.606053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.215 [2024-10-07 13:36:33.606070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.215 [2024-10-07 13:36:33.606182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.215 [2024-10-07 13:36:33.606209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.215 [2024-10-07 13:36:33.606333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.215 [2024-10-07 13:36:33.606356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.215 [2024-10-07 13:36:33.606385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.215 [2024-10-07 13:36:33.606403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.215 [2024-10-07 13:36:33.606417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.215 [2024-10-07 13:36:33.606430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.215 [2024-10-07 13:36:33.606546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.215 [2024-10-07 13:36:33.606568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.215 [2024-10-07 13:36:33.615753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.215 [2024-10-07 13:36:33.615803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.215 [2024-10-07 13:36:33.615931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.215 [2024-10-07 13:36:33.615958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.215 [2024-10-07 13:36:33.615975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.215 [2024-10-07 13:36:33.616093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.215 [2024-10-07 13:36:33.616119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.215 [2024-10-07 13:36:33.616135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.215 [2024-10-07 13:36:33.616155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.215 [2024-10-07 13:36:33.616181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.215 [2024-10-07 13:36:33.616199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.215 [2024-10-07 13:36:33.616213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.215 [2024-10-07 13:36:33.616226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.215 [2024-10-07 13:36:33.616252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.215 [2024-10-07 13:36:33.616268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.215 [2024-10-07 13:36:33.616283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.215 [2024-10-07 13:36:33.616312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.215 [2024-10-07 13:36:33.616335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.215 [2024-10-07 13:36:33.625841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.215 [2024-10-07 13:36:33.626029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.215 [2024-10-07 13:36:33.626058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.215 [2024-10-07 13:36:33.626076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.215 [2024-10-07 13:36:33.626287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.215 [2024-10-07 13:36:33.626364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.215 [2024-10-07 13:36:33.626415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.215 [2024-10-07 13:36:33.626433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.215 [2024-10-07 13:36:33.626447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.215 [2024-10-07 13:36:33.626630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.215 [2024-10-07 13:36:33.626734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.215 [2024-10-07 13:36:33.626763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.215 [2024-10-07 13:36:33.626779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.215 [2024-10-07 13:36:33.626831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.215 [2024-10-07 13:36:33.626859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.215 [2024-10-07 13:36:33.626874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.215 [2024-10-07 13:36:33.626888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.215 [2024-10-07 13:36:33.626913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.215 8455.17 IOPS, 33.03 MiB/s [2024-10-07T11:36:37.927Z] [2024-10-07 13:36:33.640108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.215 [2024-10-07 13:36:33.640143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.215 [2024-10-07 13:36:33.640524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.215 [2024-10-07 13:36:33.640556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.215 [2024-10-07 13:36:33.640574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.215 [2024-10-07 13:36:33.640672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.215 [2024-10-07 13:36:33.640700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.215 [2024-10-07 13:36:33.640717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.215 [2024-10-07 13:36:33.640923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.215 [2024-10-07 13:36:33.640951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.215 [2024-10-07 13:36:33.641167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.215 [2024-10-07 13:36:33.641190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.215 [2024-10-07 13:36:33.641211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.215 [2024-10-07 13:36:33.641230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.215 [2024-10-07 13:36:33.641245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.215 [2024-10-07 13:36:33.641258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.215 [2024-10-07 13:36:33.641321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.215 [2024-10-07 13:36:33.641356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.215 [2024-10-07 13:36:33.654104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.215 [2024-10-07 13:36:33.654138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.215 [2024-10-07 13:36:33.656164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.215 [2024-10-07 13:36:33.656196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.216 [2024-10-07 13:36:33.656213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.216 [2024-10-07 13:36:33.656301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.216 [2024-10-07 13:36:33.656330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.216 [2024-10-07 13:36:33.656356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.216 [2024-10-07 13:36:33.657025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.216 [2024-10-07 13:36:33.657056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.216 [2024-10-07 13:36:33.657477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.216 [2024-10-07 13:36:33.657503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.216 [2024-10-07 13:36:33.657516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.216 [2024-10-07 13:36:33.657534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.216 [2024-10-07 13:36:33.657549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.216 [2024-10-07 13:36:33.657562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.216 [2024-10-07 13:36:33.657804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.216 [2024-10-07 13:36:33.657829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.216 [2024-10-07 13:36:33.664418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.216 [2024-10-07 13:36:33.664451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.216 [2024-10-07 13:36:33.664704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.216 [2024-10-07 13:36:33.664733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.216 [2024-10-07 13:36:33.664751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.216 [2024-10-07 13:36:33.664840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.216 [2024-10-07 13:36:33.664867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.216 [2024-10-07 13:36:33.664897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.216 [2024-10-07 13:36:33.665576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.216 [2024-10-07 13:36:33.665605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.216 [2024-10-07 13:36:33.665655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.216 [2024-10-07 13:36:33.665681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.216 [2024-10-07 13:36:33.665712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.216 [2024-10-07 13:36:33.665729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.216 [2024-10-07 13:36:33.665743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.216 [2024-10-07 13:36:33.665755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.216 [2024-10-07 13:36:33.665779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.216 [2024-10-07 13:36:33.665795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.216 [2024-10-07 13:36:33.674534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.216 [2024-10-07 13:36:33.674583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.216 [2024-10-07 13:36:33.674745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.216 [2024-10-07 13:36:33.674775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.216 [2024-10-07 13:36:33.674792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.216 [2024-10-07 13:36:33.675107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.216 [2024-10-07 13:36:33.675138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.216 [2024-10-07 13:36:33.675155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.216 [2024-10-07 13:36:33.675175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.216 [2024-10-07 13:36:33.675380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.216 [2024-10-07 13:36:33.675407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.216 [2024-10-07 13:36:33.675421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.216 [2024-10-07 13:36:33.675434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.216 [2024-10-07 13:36:33.675500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.216 [2024-10-07 13:36:33.675522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.216 [2024-10-07 13:36:33.675535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.216 [2024-10-07 13:36:33.675566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.216 [2024-10-07 13:36:33.675590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.216 [2024-10-07 13:36:33.689500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.216 [2024-10-07 13:36:33.689540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.216 [2024-10-07 13:36:33.689864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.216 [2024-10-07 13:36:33.689897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.216 [2024-10-07 13:36:33.689915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.216 [2024-10-07 13:36:33.690027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.216 [2024-10-07 13:36:33.690052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.216 [2024-10-07 13:36:33.690069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.216 [2024-10-07 13:36:33.690276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.216 [2024-10-07 13:36:33.690304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.216 [2024-10-07 13:36:33.690352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.216 [2024-10-07 13:36:33.690372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.216 [2024-10-07 13:36:33.690386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.216 [2024-10-07 13:36:33.690403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.216 [2024-10-07 13:36:33.690417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.216 [2024-10-07 13:36:33.690430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.216 [2024-10-07 13:36:33.690613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.216 [2024-10-07 13:36:33.690636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.216 [2024-10-07 13:36:33.705013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.216 [2024-10-07 13:36:33.705063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.216 [2024-10-07 13:36:33.705638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.216 [2024-10-07 13:36:33.705678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.216 [2024-10-07 13:36:33.705697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.216 [2024-10-07 13:36:33.705785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.216 [2024-10-07 13:36:33.705811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.216 [2024-10-07 13:36:33.705826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.216 [2024-10-07 13:36:33.706043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.216 [2024-10-07 13:36:33.706071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.216 [2024-10-07 13:36:33.706118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.216 [2024-10-07 13:36:33.706139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.216 [2024-10-07 13:36:33.706153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.216 [2024-10-07 13:36:33.706176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.216 [2024-10-07 13:36:33.706192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.216 [2024-10-07 13:36:33.706205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.216 [2024-10-07 13:36:33.706401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.216 [2024-10-07 13:36:33.706429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.216 [2024-10-07 13:36:33.719566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.216 [2024-10-07 13:36:33.719599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.216 [2024-10-07 13:36:33.719754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.216 [2024-10-07 13:36:33.719784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.216 [2024-10-07 13:36:33.719801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.216 [2024-10-07 13:36:33.719893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.216 [2024-10-07 13:36:33.719918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.216 [2024-10-07 13:36:33.719933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.216 [2024-10-07 13:36:33.719958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.217 [2024-10-07 13:36:33.719979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.217 [2024-10-07 13:36:33.720002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.217 [2024-10-07 13:36:33.720017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.217 [2024-10-07 13:36:33.720031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.217 [2024-10-07 13:36:33.720047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.217 [2024-10-07 13:36:33.720061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.217 [2024-10-07 13:36:33.720074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.217 [2024-10-07 13:36:33.720100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.217 [2024-10-07 13:36:33.720116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.217 [2024-10-07 13:36:33.729686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.217 [2024-10-07 13:36:33.730372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.217 [2024-10-07 13:36:33.730519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.217 [2024-10-07 13:36:33.730548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.217 [2024-10-07 13:36:33.730565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.217 [2024-10-07 13:36:33.735816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.217 [2024-10-07 13:36:33.735849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.217 [2024-10-07 13:36:33.735867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.217 [2024-10-07 13:36:33.735892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.217 [2024-10-07 13:36:33.736569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.217 [2024-10-07 13:36:33.736596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.217 [2024-10-07 13:36:33.736609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.217 [2024-10-07 13:36:33.736622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.217 [2024-10-07 13:36:33.736735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.217 [2024-10-07 13:36:33.736758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.217 [2024-10-07 13:36:33.736772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.217 [2024-10-07 13:36:33.736785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.217 [2024-10-07 13:36:33.736809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.217 [2024-10-07 13:36:33.739770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.217 [2024-10-07 13:36:33.739884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.217 [2024-10-07 13:36:33.739912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.217 [2024-10-07 13:36:33.739929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.217 [2024-10-07 13:36:33.739969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.217 [2024-10-07 13:36:33.739993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.217 [2024-10-07 13:36:33.740008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.217 [2024-10-07 13:36:33.740022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.217 [2024-10-07 13:36:33.740046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.217 [2024-10-07 13:36:33.740673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.217 [2024-10-07 13:36:33.740825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.217 [2024-10-07 13:36:33.740856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.217 [2024-10-07 13:36:33.740873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.217 [2024-10-07 13:36:33.740899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.217 [2024-10-07 13:36:33.740923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.217 [2024-10-07 13:36:33.740937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.217 [2024-10-07 13:36:33.740951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.217 [2024-10-07 13:36:33.740976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.217 [2024-10-07 13:36:33.753187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.217 [2024-10-07 13:36:33.753222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.217 [2024-10-07 13:36:33.753538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.217 [2024-10-07 13:36:33.753571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.217 [2024-10-07 13:36:33.753589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.217 [2024-10-07 13:36:33.753679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.217 [2024-10-07 13:36:33.753706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.217 [2024-10-07 13:36:33.753723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.217 [2024-10-07 13:36:33.753945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.217 [2024-10-07 13:36:33.753974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.217 [2024-10-07 13:36:33.754022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.217 [2024-10-07 13:36:33.754057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.217 [2024-10-07 13:36:33.754072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.217 [2024-10-07 13:36:33.754089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.217 [2024-10-07 13:36:33.754119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.217 [2024-10-07 13:36:33.754133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.217 [2024-10-07 13:36:33.754316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.217 [2024-10-07 13:36:33.754339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.217 [2024-10-07 13:36:33.768318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.217 [2024-10-07 13:36:33.768352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.217 [2024-10-07 13:36:33.768726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.217 [2024-10-07 13:36:33.768757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.217 [2024-10-07 13:36:33.768774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.217 [2024-10-07 13:36:33.768863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.217 [2024-10-07 13:36:33.768889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.217 [2024-10-07 13:36:33.768906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.217 [2024-10-07 13:36:33.769110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.217 [2024-10-07 13:36:33.769139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.217 [2024-10-07 13:36:33.769338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.217 [2024-10-07 13:36:33.769361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.217 [2024-10-07 13:36:33.769375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.217 [2024-10-07 13:36:33.769392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.217 [2024-10-07 13:36:33.769412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.217 [2024-10-07 13:36:33.769426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.217 [2024-10-07 13:36:33.769492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.217 [2024-10-07 13:36:33.769513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.217 [2024-10-07 13:36:33.783349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.217 [2024-10-07 13:36:33.783384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.217 [2024-10-07 13:36:33.783900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.217 [2024-10-07 13:36:33.783934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.217 [2024-10-07 13:36:33.783952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.217 [2024-10-07 13:36:33.784122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.217 [2024-10-07 13:36:33.784148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.217 [2024-10-07 13:36:33.784164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.217 [2024-10-07 13:36:33.784549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.217 [2024-10-07 13:36:33.784580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.217 [2024-10-07 13:36:33.784825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.217 [2024-10-07 13:36:33.784852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.217 [2024-10-07 13:36:33.784867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.217 [2024-10-07 13:36:33.784886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.217 [2024-10-07 13:36:33.784900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.217 [2024-10-07 13:36:33.784914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.217 [2024-10-07 13:36:33.784980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.217 [2024-10-07 13:36:33.785000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.217 [2024-10-07 13:36:33.799512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.217 [2024-10-07 13:36:33.799546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.217 [2024-10-07 13:36:33.800329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.217 [2024-10-07 13:36:33.800361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.217 [2024-10-07 13:36:33.800379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.217 [2024-10-07 13:36:33.800515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.218 [2024-10-07 13:36:33.800540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.218 [2024-10-07 13:36:33.800556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.218 [2024-10-07 13:36:33.800802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.218 [2024-10-07 13:36:33.800832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.218 [2024-10-07 13:36:33.801032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.218 [2024-10-07 13:36:33.801055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.218 [2024-10-07 13:36:33.801070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.218 [2024-10-07 13:36:33.801088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.218 [2024-10-07 13:36:33.801103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.218 [2024-10-07 13:36:33.801116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.218 [2024-10-07 13:36:33.801319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.218 [2024-10-07 13:36:33.801342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.218 [2024-10-07 13:36:33.815104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.218 [2024-10-07 13:36:33.815138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.218 [2024-10-07 13:36:33.815861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.218 [2024-10-07 13:36:33.815893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.218 [2024-10-07 13:36:33.815910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.218 [2024-10-07 13:36:33.816020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.218 [2024-10-07 13:36:33.816046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.218 [2024-10-07 13:36:33.816061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.218 [2024-10-07 13:36:33.816291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.218 [2024-10-07 13:36:33.816319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.218 [2024-10-07 13:36:33.816520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.218 [2024-10-07 13:36:33.816543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.218 [2024-10-07 13:36:33.816558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.218 [2024-10-07 13:36:33.816575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.218 [2024-10-07 13:36:33.816590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.218 [2024-10-07 13:36:33.816603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.218 [2024-10-07 13:36:33.816679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.218 [2024-10-07 13:36:33.816717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.218 [2024-10-07 13:36:33.830327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.218 [2024-10-07 13:36:33.830361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.218 [2024-10-07 13:36:33.830605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.218 [2024-10-07 13:36:33.830639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.218 [2024-10-07 13:36:33.830656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.218 [2024-10-07 13:36:33.830780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.218 [2024-10-07 13:36:33.830807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.218 [2024-10-07 13:36:33.830823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.218 [2024-10-07 13:36:33.830849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.218 [2024-10-07 13:36:33.830871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.218 [2024-10-07 13:36:33.830892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.218 [2024-10-07 13:36:33.830908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.218 [2024-10-07 13:36:33.830922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.218 [2024-10-07 13:36:33.830939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.218 [2024-10-07 13:36:33.830953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.218 [2024-10-07 13:36:33.830966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.218 [2024-10-07 13:36:33.830991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.218 [2024-10-07 13:36:33.831024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.218 [2024-10-07 13:36:33.841553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.218 [2024-10-07 13:36:33.841587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.218 [2024-10-07 13:36:33.843551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.218 [2024-10-07 13:36:33.843584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.218 [2024-10-07 13:36:33.843602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.218 [2024-10-07 13:36:33.843714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.218 [2024-10-07 13:36:33.843741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.218 [2024-10-07 13:36:33.843757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.218 [2024-10-07 13:36:33.845985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.218 [2024-10-07 13:36:33.846018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.218 [2024-10-07 13:36:33.847007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.218 [2024-10-07 13:36:33.847032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.218 [2024-10-07 13:36:33.847045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.218 [2024-10-07 13:36:33.847062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.218 [2024-10-07 13:36:33.847076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.218 [2024-10-07 13:36:33.847096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.218 [2024-10-07 13:36:33.847365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.218 [2024-10-07 13:36:33.847389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.218 [2024-10-07 13:36:33.851917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.218 [2024-10-07 13:36:33.851948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.218 [2024-10-07 13:36:33.852073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.218 [2024-10-07 13:36:33.852101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.218 [2024-10-07 13:36:33.852118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.218 [2024-10-07 13:36:33.852233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.218 [2024-10-07 13:36:33.852259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.218 [2024-10-07 13:36:33.852275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.218 [2024-10-07 13:36:33.852301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.218 [2024-10-07 13:36:33.852323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.218 [2024-10-07 13:36:33.852345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.218 [2024-10-07 13:36:33.852361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.218 [2024-10-07 13:36:33.852373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.218 [2024-10-07 13:36:33.852390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.218 [2024-10-07 13:36:33.852405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.218 [2024-10-07 13:36:33.852418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.218 [2024-10-07 13:36:33.852443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.218 [2024-10-07 13:36:33.852459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.218 [2024-10-07 13:36:33.862151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.218 [2024-10-07 13:36:33.862185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.218 [2024-10-07 13:36:33.862352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.218 [2024-10-07 13:36:33.862381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.218 [2024-10-07 13:36:33.862399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.218 [2024-10-07 13:36:33.862510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.218 [2024-10-07 13:36:33.862536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.218 [2024-10-07 13:36:33.862552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.218 [2024-10-07 13:36:33.862746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.218 [2024-10-07 13:36:33.862781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.218 [2024-10-07 13:36:33.862831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.219 [2024-10-07 13:36:33.862851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.219 [2024-10-07 13:36:33.862865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.219 [2024-10-07 13:36:33.862883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.219 [2024-10-07 13:36:33.862897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.219 [2024-10-07 13:36:33.862910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.219 [2024-10-07 13:36:33.863103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.219 [2024-10-07 13:36:33.863126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.219 [2024-10-07 13:36:33.874672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.219 [2024-10-07 13:36:33.874707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.219 [2024-10-07 13:36:33.875081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.219 [2024-10-07 13:36:33.875113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.219 [2024-10-07 13:36:33.875130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.219 [2024-10-07 13:36:33.875269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.219 [2024-10-07 13:36:33.875295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.219 [2024-10-07 13:36:33.875312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.219 [2024-10-07 13:36:33.875814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.219 [2024-10-07 13:36:33.875847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.219 [2024-10-07 13:36:33.876168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.219 [2024-10-07 13:36:33.876194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.219 [2024-10-07 13:36:33.876209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.219 [2024-10-07 13:36:33.876227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.219 [2024-10-07 13:36:33.876242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.219 [2024-10-07 13:36:33.876256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.219 [2024-10-07 13:36:33.876495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.219 [2024-10-07 13:36:33.876518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.219 [2024-10-07 13:36:33.885409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.219 [2024-10-07 13:36:33.885442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.219 [2024-10-07 13:36:33.885675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.219 [2024-10-07 13:36:33.885705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.219 [2024-10-07 13:36:33.885728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.219 [2024-10-07 13:36:33.885836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.219 [2024-10-07 13:36:33.885862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.219 [2024-10-07 13:36:33.885879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.219 [2024-10-07 13:36:33.885985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.219 [2024-10-07 13:36:33.886012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.219 [2024-10-07 13:36:33.886141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.219 [2024-10-07 13:36:33.886162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.219 [2024-10-07 13:36:33.886175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.219 [2024-10-07 13:36:33.886191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.219 [2024-10-07 13:36:33.886221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.219 [2024-10-07 13:36:33.886234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.219 [2024-10-07 13:36:33.887261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.219 [2024-10-07 13:36:33.887287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.219 [2024-10-07 13:36:33.896054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.219 [2024-10-07 13:36:33.896087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.219 [2024-10-07 13:36:33.896223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.219 [2024-10-07 13:36:33.896252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.219 [2024-10-07 13:36:33.896268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.219 [2024-10-07 13:36:33.896405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.219 [2024-10-07 13:36:33.896431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.219 [2024-10-07 13:36:33.896446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.219 [2024-10-07 13:36:33.896472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.219 [2024-10-07 13:36:33.896494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.219 [2024-10-07 13:36:33.896515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.219 [2024-10-07 13:36:33.896530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.219 [2024-10-07 13:36:33.896543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.219 [2024-10-07 13:36:33.896560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.219 [2024-10-07 13:36:33.896574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.219 [2024-10-07 13:36:33.896593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.219 [2024-10-07 13:36:33.896620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.219 [2024-10-07 13:36:33.896636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.219 [2024-10-07 13:36:33.906171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.219 [2024-10-07 13:36:33.906221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.219 [2024-10-07 13:36:33.906411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.219 [2024-10-07 13:36:33.906439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.219 [2024-10-07 13:36:33.906456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.219 [2024-10-07 13:36:33.906574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.219 [2024-10-07 13:36:33.906600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.219 [2024-10-07 13:36:33.906616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.219 [2024-10-07 13:36:33.906635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.219 [2024-10-07 13:36:33.906920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.219 [2024-10-07 13:36:33.906963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.219 [2024-10-07 13:36:33.906977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.219 [2024-10-07 13:36:33.906990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.219 [2024-10-07 13:36:33.907056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.219 [2024-10-07 13:36:33.907077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.219 [2024-10-07 13:36:33.907090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.219 [2024-10-07 13:36:33.907120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.219 [2024-10-07 13:36:33.907144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.219 [2024-10-07 13:36:33.918536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.219 [2024-10-07 13:36:33.918569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.219 [2024-10-07 13:36:33.918692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.219 [2024-10-07 13:36:33.918722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.219 [2024-10-07 13:36:33.918740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.219 [2024-10-07 13:36:33.918817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.219 [2024-10-07 13:36:33.918843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.219 [2024-10-07 13:36:33.918859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.219 [2024-10-07 13:36:33.919114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.220 [2024-10-07 13:36:33.919158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.220 [2024-10-07 13:36:33.919235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.220 [2024-10-07 13:36:33.919257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.220 [2024-10-07 13:36:33.919285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.220 [2024-10-07 13:36:33.919304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.220 [2024-10-07 13:36:33.919319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.220 [2024-10-07 13:36:33.919332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.220 [2024-10-07 13:36:33.919514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.220 [2024-10-07 13:36:33.919537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.220 [2024-10-07 13:36:33.928842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.220 [2024-10-07 13:36:33.928875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.220 [2024-10-07 13:36:33.929031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.220 [2024-10-07 13:36:33.929059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.220 [2024-10-07 13:36:33.929076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.220 [2024-10-07 13:36:33.929185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.220 [2024-10-07 13:36:33.929211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.220 [2024-10-07 13:36:33.929227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.220 [2024-10-07 13:36:33.929253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.220 [2024-10-07 13:36:33.929275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.220 [2024-10-07 13:36:33.929296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.220 [2024-10-07 13:36:33.929311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.220 [2024-10-07 13:36:33.929325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.220 [2024-10-07 13:36:33.929342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.220 [2024-10-07 13:36:33.929356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.220 [2024-10-07 13:36:33.929369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.220 [2024-10-07 13:36:33.932047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.220 [2024-10-07 13:36:33.932075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.220 [2024-10-07 13:36:33.938972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.220 [2024-10-07 13:36:33.939017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.220 [2024-10-07 13:36:33.939214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.220 [2024-10-07 13:36:33.939243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.220 [2024-10-07 13:36:33.939260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.220 [2024-10-07 13:36:33.939368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.220 [2024-10-07 13:36:33.939395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.220 [2024-10-07 13:36:33.939411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.220 [2024-10-07 13:36:33.939430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.220 [2024-10-07 13:36:33.939565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.220 [2024-10-07 13:36:33.939592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.220 [2024-10-07 13:36:33.939606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.220 [2024-10-07 13:36:33.939620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.220 [2024-10-07 13:36:33.939748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.220 [2024-10-07 13:36:33.939771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.220 [2024-10-07 13:36:33.939786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.220 [2024-10-07 13:36:33.939799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.220 [2024-10-07 13:36:33.939903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.220 [2024-10-07 13:36:33.950489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.220 [2024-10-07 13:36:33.950523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.220 [2024-10-07 13:36:33.950745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.220 [2024-10-07 13:36:33.950775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.220 [2024-10-07 13:36:33.950792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.220 [2024-10-07 13:36:33.950871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.220 [2024-10-07 13:36:33.950896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.220 [2024-10-07 13:36:33.950912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.220 [2024-10-07 13:36:33.951096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.220 [2024-10-07 13:36:33.951139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.220 [2024-10-07 13:36:33.951653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.220 [2024-10-07 13:36:33.951702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.220 [2024-10-07 13:36:33.951718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.221 [2024-10-07 13:36:33.951736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.221 [2024-10-07 13:36:33.951750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.221 [2024-10-07 13:36:33.951764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.221 [2024-10-07 13:36:33.951990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.221 [2024-10-07 13:36:33.952014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.221 [2024-10-07 13:36:33.963736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.221 [2024-10-07 13:36:33.963770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.221 [2024-10-07 13:36:33.964228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-10-07 13:36:33.964260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.221 [2024-10-07 13:36:33.964278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.221 [2024-10-07 13:36:33.964387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-10-07 13:36:33.964413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.221 [2024-10-07 13:36:33.964430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.221 [2024-10-07 13:36:33.964950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.221 [2024-10-07 13:36:33.964996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.221 [2024-10-07 13:36:33.965243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.221 [2024-10-07 13:36:33.965266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.221 [2024-10-07 13:36:33.965281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.221 [2024-10-07 13:36:33.965300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.221 [2024-10-07 13:36:33.965315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.221 [2024-10-07 13:36:33.965328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.221 [2024-10-07 13:36:33.965541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.221 [2024-10-07 13:36:33.965565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.221 [2024-10-07 13:36:33.976415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.221 [2024-10-07 13:36:33.976448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.221 [2024-10-07 13:36:33.977504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-10-07 13:36:33.977536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.221 [2024-10-07 13:36:33.977555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.221 [2024-10-07 13:36:33.977640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-10-07 13:36:33.977673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.221 [2024-10-07 13:36:33.977691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.221 [2024-10-07 13:36:33.978613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.221 [2024-10-07 13:36:33.978644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.221 [2024-10-07 13:36:33.979472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.221 [2024-10-07 13:36:33.979503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.221 [2024-10-07 13:36:33.979519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.221 [2024-10-07 13:36:33.979536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.221 [2024-10-07 13:36:33.979551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.221 [2024-10-07 13:36:33.979564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.221 [2024-10-07 13:36:33.979961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.221 [2024-10-07 13:36:33.979986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.221 [2024-10-07 13:36:33.986531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.221 [2024-10-07 13:36:33.986895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.221 [2024-10-07 13:36:33.987003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-10-07 13:36:33.987031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.221 [2024-10-07 13:36:33.987047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.221 [2024-10-07 13:36:33.987397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-10-07 13:36:33.987427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.221 [2024-10-07 13:36:33.987443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.221 [2024-10-07 13:36:33.987464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.221 [2024-10-07 13:36:33.987525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.221 [2024-10-07 13:36:33.987549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.221 [2024-10-07 13:36:33.987563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.221 [2024-10-07 13:36:33.987576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.221 [2024-10-07 13:36:33.987601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.221 [2024-10-07 13:36:33.987620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.221 [2024-10-07 13:36:33.987634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.221 [2024-10-07 13:36:33.987646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.221 [2024-10-07 13:36:33.987679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.221 [2024-10-07 13:36:33.996615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.221 [2024-10-07 13:36:33.996759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-10-07 13:36:33.996788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.221 [2024-10-07 13:36:33.996805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.221 [2024-10-07 13:36:33.996831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.221 [2024-10-07 13:36:33.996860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.221 [2024-10-07 13:36:33.996877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.221 [2024-10-07 13:36:33.996891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.221 [2024-10-07 13:36:33.996915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.221 [2024-10-07 13:36:33.996978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.221 [2024-10-07 13:36:33.997099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-10-07 13:36:33.997125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.221 [2024-10-07 13:36:33.997141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.221 [2024-10-07 13:36:33.997165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.221 [2024-10-07 13:36:33.997188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.221 [2024-10-07 13:36:33.997204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.221 [2024-10-07 13:36:33.997216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.221 [2024-10-07 13:36:33.997254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.221 [2024-10-07 13:36:34.006916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.221 [2024-10-07 13:36:34.007147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-10-07 13:36:34.007180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.221 [2024-10-07 13:36:34.007199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.221 [2024-10-07 13:36:34.007338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.221 [2024-10-07 13:36:34.009990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.221 [2024-10-07 13:36:34.010030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.221 [2024-10-07 13:36:34.010048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.221 [2024-10-07 13:36:34.010062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.221 [2024-10-07 13:36:34.011043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.221 [2024-10-07 13:36:34.011167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-10-07 13:36:34.011210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.221 [2024-10-07 13:36:34.011228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.221 [2024-10-07 13:36:34.011723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.222 [2024-10-07 13:36:34.011946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.222 [2024-10-07 13:36:34.011972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.222 [2024-10-07 13:36:34.011987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.222 [2024-10-07 13:36:34.012044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.222 [2024-10-07 13:36:34.017479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.222 [2024-10-07 13:36:34.017645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-10-07 13:36:34.017717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.222 [2024-10-07 13:36:34.017740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.222 [2024-10-07 13:36:34.018104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.222 [2024-10-07 13:36:34.018177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.222 [2024-10-07 13:36:34.018198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.222 [2024-10-07 13:36:34.018213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.222 [2024-10-07 13:36:34.018239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.222 [2024-10-07 13:36:34.023389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.222 [2024-10-07 13:36:34.025539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-10-07 13:36:34.025572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.222 [2024-10-07 13:36:34.025589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.222 [2024-10-07 13:36:34.026279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.222 [2024-10-07 13:36:34.026554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.222 [2024-10-07 13:36:34.026580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.222 [2024-10-07 13:36:34.026595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.222 [2024-10-07 13:36:34.026809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.222 [2024-10-07 13:36:34.027787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.222 [2024-10-07 13:36:34.027932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-10-07 13:36:34.027960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.222 [2024-10-07 13:36:34.027977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.222 [2024-10-07 13:36:34.028161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.222 [2024-10-07 13:36:34.028231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.222 [2024-10-07 13:36:34.028252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.222 [2024-10-07 13:36:34.028265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.222 [2024-10-07 13:36:34.028305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.222 [2024-10-07 13:36:34.033530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.222 [2024-10-07 13:36:34.033694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-10-07 13:36:34.033723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.222 [2024-10-07 13:36:34.033746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.222 [2024-10-07 13:36:34.034198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.222 [2024-10-07 13:36:34.034260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.222 [2024-10-07 13:36:34.034280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.222 [2024-10-07 13:36:34.034307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.222 [2024-10-07 13:36:34.034334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.222 [2024-10-07 13:36:34.041912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.222 [2024-10-07 13:36:34.042521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-10-07 13:36:34.042553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.222 [2024-10-07 13:36:34.042571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.222 [2024-10-07 13:36:34.042797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.222 [2024-10-07 13:36:34.042856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.222 [2024-10-07 13:36:34.042876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.222 [2024-10-07 13:36:34.042890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.222 [2024-10-07 13:36:34.043073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.222 [2024-10-07 13:36:34.043642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.222 [2024-10-07 13:36:34.043797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-10-07 13:36:34.043826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.222 [2024-10-07 13:36:34.043843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.222 [2024-10-07 13:36:34.043869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.222 [2024-10-07 13:36:34.043893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.222 [2024-10-07 13:36:34.043908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.222 [2024-10-07 13:36:34.043922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.222 [2024-10-07 13:36:34.043946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.222 [2024-10-07 13:36:34.057451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.222 [2024-10-07 13:36:34.057596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.222 [2024-10-07 13:36:34.057734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-10-07 13:36:34.057764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.222 [2024-10-07 13:36:34.057780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.222 [2024-10-07 13:36:34.057872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-10-07 13:36:34.057906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.222 [2024-10-07 13:36:34.057924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.222 [2024-10-07 13:36:34.057944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.222 [2024-10-07 13:36:34.057971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.222 [2024-10-07 13:36:34.057990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.222 [2024-10-07 13:36:34.058004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.222 [2024-10-07 13:36:34.058017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.222 [2024-10-07 13:36:34.058060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.222 [2024-10-07 13:36:34.058082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.222 [2024-10-07 13:36:34.058095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.222 [2024-10-07 13:36:34.058128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.222 [2024-10-07 13:36:34.058151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.222 [2024-10-07 13:36:34.068985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.222 [2024-10-07 13:36:34.069033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.222 [2024-10-07 13:36:34.069290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-10-07 13:36:34.069319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.222 [2024-10-07 13:36:34.069336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.222 [2024-10-07 13:36:34.069422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-10-07 13:36:34.069449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.222 [2024-10-07 13:36:34.069466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.222 [2024-10-07 13:36:34.069594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.222 [2024-10-07 13:36:34.069622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.222 [2024-10-07 13:36:34.069737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.222 [2024-10-07 13:36:34.069760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.222 [2024-10-07 13:36:34.069774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.222 [2024-10-07 13:36:34.069791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.222 [2024-10-07 13:36:34.069805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.222 [2024-10-07 13:36:34.069818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.222 [2024-10-07 13:36:34.069939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.222 [2024-10-07 13:36:34.069961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.222 [2024-10-07 13:36:34.079112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.223 [2024-10-07 13:36:34.079158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.223 [2024-10-07 13:36:34.079293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.223 [2024-10-07 13:36:34.079321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.223 [2024-10-07 13:36:34.079338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.223 [2024-10-07 13:36:34.079486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.223 [2024-10-07 13:36:34.079513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.223 [2024-10-07 13:36:34.079529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.223 [2024-10-07 13:36:34.079548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.223 [2024-10-07 13:36:34.079574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.223 [2024-10-07 13:36:34.079593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.223 [2024-10-07 13:36:34.079606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.223 [2024-10-07 13:36:34.079619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.223 [2024-10-07 13:36:34.079644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.223 [2024-10-07 13:36:34.079661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.223 [2024-10-07 13:36:34.079685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.223 [2024-10-07 13:36:34.079699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.223 [2024-10-07 13:36:34.079722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.223 [2024-10-07 13:36:34.089211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.223 [2024-10-07 13:36:34.089390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.223 [2024-10-07 13:36:34.089421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.223 [2024-10-07 13:36:34.089439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.223 [2024-10-07 13:36:34.089478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.223 [2024-10-07 13:36:34.089509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.223 [2024-10-07 13:36:34.089538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.223 [2024-10-07 13:36:34.089555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.223 [2024-10-07 13:36:34.089568] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.223 [2024-10-07 13:36:34.089592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.223 [2024-10-07 13:36:34.089780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.223 [2024-10-07 13:36:34.089807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.223 [2024-10-07 13:36:34.089824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.223 [2024-10-07 13:36:34.089854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.223 [2024-10-07 13:36:34.089879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.223 [2024-10-07 13:36:34.089895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.223 [2024-10-07 13:36:34.089908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.223 [2024-10-07 13:36:34.089939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.223 [2024-10-07 13:36:34.105225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.223 [2024-10-07 13:36:34.105361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.223 [2024-10-07 13:36:34.105541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.223 [2024-10-07 13:36:34.105571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.223 [2024-10-07 13:36:34.105589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.223 [2024-10-07 13:36:34.105686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.223 [2024-10-07 13:36:34.105714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.223 [2024-10-07 13:36:34.105730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.223 [2024-10-07 13:36:34.105749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.223 [2024-10-07 13:36:34.106284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.223 [2024-10-07 13:36:34.106312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.223 [2024-10-07 13:36:34.106326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.223 [2024-10-07 13:36:34.106339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.223 [2024-10-07 13:36:34.106566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.223 [2024-10-07 13:36:34.106591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.223 [2024-10-07 13:36:34.106606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.223 [2024-10-07 13:36:34.106619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.223 [2024-10-07 13:36:34.106833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.223 [2024-10-07 13:36:34.119524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.223 [2024-10-07 13:36:34.119557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.223 [2024-10-07 13:36:34.119699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.223 [2024-10-07 13:36:34.119728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.223 [2024-10-07 13:36:34.119745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.223 [2024-10-07 13:36:34.119823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.223 [2024-10-07 13:36:34.119850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.223 [2024-10-07 13:36:34.119873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.223 [2024-10-07 13:36:34.119899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.223 [2024-10-07 13:36:34.119921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.223 [2024-10-07 13:36:34.119941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.223 [2024-10-07 13:36:34.119956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.223 [2024-10-07 13:36:34.119970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.223 [2024-10-07 13:36:34.119987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.223 [2024-10-07 13:36:34.120001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.223 [2024-10-07 13:36:34.120014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.223 [2024-10-07 13:36:34.120038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.223 [2024-10-07 13:36:34.120055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.223 [2024-10-07 13:36:34.133777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.223 [2024-10-07 13:36:34.133810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.223 [2024-10-07 13:36:34.135834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.223 [2024-10-07 13:36:34.135867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.223 [2024-10-07 13:36:34.135884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.223 [2024-10-07 13:36:34.135970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.223 [2024-10-07 13:36:34.135995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.223 [2024-10-07 13:36:34.136011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.223 [2024-10-07 13:36:34.136695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.223 [2024-10-07 13:36:34.136727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.223 [2024-10-07 13:36:34.137115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.223 [2024-10-07 13:36:34.137154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.223 [2024-10-07 13:36:34.137167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.223 [2024-10-07 13:36:34.137184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.223 [2024-10-07 13:36:34.137198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.223 [2024-10-07 13:36:34.137210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.223 [2024-10-07 13:36:34.137285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.223 [2024-10-07 13:36:34.137306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.223 [2024-10-07 13:36:34.143895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.223 [2024-10-07 13:36:34.143949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.223 [2024-10-07 13:36:34.144183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.223 [2024-10-07 13:36:34.144212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.223 [2024-10-07 13:36:34.144229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.223 [2024-10-07 13:36:34.144348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.223 [2024-10-07 13:36:34.144376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.223 [2024-10-07 13:36:34.144392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.223 [2024-10-07 13:36:34.144411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.223 [2024-10-07 13:36:34.144437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.223 [2024-10-07 13:36:34.144456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.224 [2024-10-07 13:36:34.144469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.224 [2024-10-07 13:36:34.144481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.224 [2024-10-07 13:36:34.144506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.224 [2024-10-07 13:36:34.144523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.224 [2024-10-07 13:36:34.144535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.224 [2024-10-07 13:36:34.144548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.224 [2024-10-07 13:36:34.144570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.224 [2024-10-07 13:36:34.154976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.224 [2024-10-07 13:36:34.155010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.224 [2024-10-07 13:36:34.155178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.224 [2024-10-07 13:36:34.155208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.224 [2024-10-07 13:36:34.155225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.224 [2024-10-07 13:36:34.155340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.224 [2024-10-07 13:36:34.155366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.224 [2024-10-07 13:36:34.155383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.224 [2024-10-07 13:36:34.155408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.224 [2024-10-07 13:36:34.155430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.224 [2024-10-07 13:36:34.155466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.224 [2024-10-07 13:36:34.155486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.224 [2024-10-07 13:36:34.155499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.224 [2024-10-07 13:36:34.155522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.224 [2024-10-07 13:36:34.155553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.224 [2024-10-07 13:36:34.155567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.224 [2024-10-07 13:36:34.155592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.224 [2024-10-07 13:36:34.155623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.224 [2024-10-07 13:36:34.166952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.224 [2024-10-07 13:36:34.166988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.224 [2024-10-07 13:36:34.167386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.224 [2024-10-07 13:36:34.167418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.224 [2024-10-07 13:36:34.167435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.224 [2024-10-07 13:36:34.167519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.224 [2024-10-07 13:36:34.167546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.224 [2024-10-07 13:36:34.167562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.224 [2024-10-07 13:36:34.167924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.224 [2024-10-07 13:36:34.167970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.224 [2024-10-07 13:36:34.168045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.224 [2024-10-07 13:36:34.168065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.224 [2024-10-07 13:36:34.168079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.224 [2024-10-07 13:36:34.168097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.224 [2024-10-07 13:36:34.168111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.224 [2024-10-07 13:36:34.168124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.224 [2024-10-07 13:36:34.168307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.224 [2024-10-07 13:36:34.168332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.224 [2024-10-07 13:36:34.177416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.224 [2024-10-07 13:36:34.177450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.224 [2024-10-07 13:36:34.177691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.224 [2024-10-07 13:36:34.177724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.224 [2024-10-07 13:36:34.177742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.224 [2024-10-07 13:36:34.177854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.224 [2024-10-07 13:36:34.177881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.224 [2024-10-07 13:36:34.177897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.224 [2024-10-07 13:36:34.178012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.224 [2024-10-07 13:36:34.178040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.224 [2024-10-07 13:36:34.178171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.224 [2024-10-07 13:36:34.178193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.224 [2024-10-07 13:36:34.178207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.224 [2024-10-07 13:36:34.178223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.224 [2024-10-07 13:36:34.178237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.224 [2024-10-07 13:36:34.178249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.224 [2024-10-07 13:36:34.178385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.224 [2024-10-07 13:36:34.178406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.224 [2024-10-07 13:36:34.187839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.224 [2024-10-07 13:36:34.187873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.224 [2024-10-07 13:36:34.188109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.224 [2024-10-07 13:36:34.188140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.224 [2024-10-07 13:36:34.188158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.224 [2024-10-07 13:36:34.188241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.224 [2024-10-07 13:36:34.188268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.224 [2024-10-07 13:36:34.188284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.224 [2024-10-07 13:36:34.188392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.224 [2024-10-07 13:36:34.188420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.224 [2024-10-07 13:36:34.188550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.224 [2024-10-07 13:36:34.188571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.224 [2024-10-07 13:36:34.188584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.224 [2024-10-07 13:36:34.188601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.224 [2024-10-07 13:36:34.188614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.224 [2024-10-07 13:36:34.188626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.224 [2024-10-07 13:36:34.188703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.224 [2024-10-07 13:36:34.188724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.224 [2024-10-07 13:36:34.197950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.225 [2024-10-07 13:36:34.198242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-10-07 13:36:34.198274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.225 [2024-10-07 13:36:34.198298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.225 [2024-10-07 13:36:34.198352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.225 [2024-10-07 13:36:34.198388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.225 [2024-10-07 13:36:34.198505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-10-07 13:36:34.198533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.225 [2024-10-07 13:36:34.198549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.225 [2024-10-07 13:36:34.198564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.225 [2024-10-07 13:36:34.198577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.225 [2024-10-07 13:36:34.198590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.225 [2024-10-07 13:36:34.198785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.225 [2024-10-07 13:36:34.198815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.225 [2024-10-07 13:36:34.198866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.225 [2024-10-07 13:36:34.198887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.225 [2024-10-07 13:36:34.198902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.225 [2024-10-07 13:36:34.198926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.225 [2024-10-07 13:36:34.208269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.225 [2024-10-07 13:36:34.208534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-10-07 13:36:34.208565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.225 [2024-10-07 13:36:34.208583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.225 [2024-10-07 13:36:34.211871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.225 [2024-10-07 13:36:34.212772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.225 [2024-10-07 13:36:34.212798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.225 [2024-10-07 13:36:34.212812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.225 [2024-10-07 13:36:34.213215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.225 [2024-10-07 13:36:34.213255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.225 [2024-10-07 13:36:34.213613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-10-07 13:36:34.213644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.225 [2024-10-07 13:36:34.213660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.225 [2024-10-07 13:36:34.213878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.225 [2024-10-07 13:36:34.213941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.225 [2024-10-07 13:36:34.213962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.225 [2024-10-07 13:36:34.213976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.225 [2024-10-07 13:36:34.214001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.225 [2024-10-07 13:36:34.218356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.225 [2024-10-07 13:36:34.218534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-10-07 13:36:34.218564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.225 [2024-10-07 13:36:34.218581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.225 [2024-10-07 13:36:34.218606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.225 [2024-10-07 13:36:34.218630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.225 [2024-10-07 13:36:34.218645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.225 [2024-10-07 13:36:34.218659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.225 [2024-10-07 13:36:34.218693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.225 [2024-10-07 13:36:34.228952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.225 [2024-10-07 13:36:34.229002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.225 [2024-10-07 13:36:34.229111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-10-07 13:36:34.229139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.225 [2024-10-07 13:36:34.229156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.225 [2024-10-07 13:36:34.229252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-10-07 13:36:34.229280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.225 [2024-10-07 13:36:34.229297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.225 [2024-10-07 13:36:34.229316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.225 [2024-10-07 13:36:34.229342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.225 [2024-10-07 13:36:34.229360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.225 [2024-10-07 13:36:34.229373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.225 [2024-10-07 13:36:34.229387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.225 [2024-10-07 13:36:34.229412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.225 [2024-10-07 13:36:34.229429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.225 [2024-10-07 13:36:34.229441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.225 [2024-10-07 13:36:34.229454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.225 [2024-10-07 13:36:34.229477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.225 [2024-10-07 13:36:34.243732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.225 [2024-10-07 13:36:34.243766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.225 [2024-10-07 13:36:34.243906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-10-07 13:36:34.243937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.225 [2024-10-07 13:36:34.243955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.225 [2024-10-07 13:36:34.244053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-10-07 13:36:34.244080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.225 [2024-10-07 13:36:34.244096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.225 [2024-10-07 13:36:34.245393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.225 [2024-10-07 13:36:34.245424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.225 [2024-10-07 13:36:34.245782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.225 [2024-10-07 13:36:34.245807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.225 [2024-10-07 13:36:34.245822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.225 [2024-10-07 13:36:34.245839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.225 [2024-10-07 13:36:34.245854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.225 [2024-10-07 13:36:34.245867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.225 [2024-10-07 13:36:34.246157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.225 [2024-10-07 13:36:34.246183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.225 [2024-10-07 13:36:34.257135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.225 [2024-10-07 13:36:34.257168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.225 [2024-10-07 13:36:34.259078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-10-07 13:36:34.259110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.225 [2024-10-07 13:36:34.259128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.225 [2024-10-07 13:36:34.259211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-10-07 13:36:34.259237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.225 [2024-10-07 13:36:34.259253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.225 [2024-10-07 13:36:34.260139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.225 [2024-10-07 13:36:34.260169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.225 [2024-10-07 13:36:34.260573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.225 [2024-10-07 13:36:34.260598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.225 [2024-10-07 13:36:34.260618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.225 [2024-10-07 13:36:34.260637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.225 [2024-10-07 13:36:34.260652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.225 [2024-10-07 13:36:34.260675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.225 [2024-10-07 13:36:34.260895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.225 [2024-10-07 13:36:34.260919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.225 [2024-10-07 13:36:34.267279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.225 [2024-10-07 13:36:34.267326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.225 [2024-10-07 13:36:34.267464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-10-07 13:36:34.267494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.225 [2024-10-07 13:36:34.267512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.225 [2024-10-07 13:36:34.267587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-10-07 13:36:34.267614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.225 [2024-10-07 13:36:34.267630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.225 [2024-10-07 13:36:34.268068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.225 [2024-10-07 13:36:34.268096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.225 [2024-10-07 13:36:34.268128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.225 [2024-10-07 13:36:34.268144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.225 [2024-10-07 13:36:34.268157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.225 [2024-10-07 13:36:34.268175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.225 [2024-10-07 13:36:34.268189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.225 [2024-10-07 13:36:34.268202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.225 [2024-10-07 13:36:34.268227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.225 [2024-10-07 13:36:34.268244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.225 [2024-10-07 13:36:34.279213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.225 [2024-10-07 13:36:34.279247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.226 [2024-10-07 13:36:34.279483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-10-07 13:36:34.279513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.226 [2024-10-07 13:36:34.279531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.226 [2024-10-07 13:36:34.279641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-10-07 13:36:34.279688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.226 [2024-10-07 13:36:34.279711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.226 [2024-10-07 13:36:34.279737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.226 [2024-10-07 13:36:34.279759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.226 [2024-10-07 13:36:34.279780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.226 [2024-10-07 13:36:34.279795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.226 [2024-10-07 13:36:34.279808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.226 [2024-10-07 13:36:34.279825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.226 [2024-10-07 13:36:34.279839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.226 [2024-10-07 13:36:34.279852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.226 [2024-10-07 13:36:34.279877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.226 [2024-10-07 13:36:34.279893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.226 [2024-10-07 13:36:34.291466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.226 [2024-10-07 13:36:34.291516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.226 [2024-10-07 13:36:34.291858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-10-07 13:36:34.291890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.226 [2024-10-07 13:36:34.291908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.226 [2024-10-07 13:36:34.292029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-10-07 13:36:34.292057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.226 [2024-10-07 13:36:34.292074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.226 [2024-10-07 13:36:34.292278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.226 [2024-10-07 13:36:34.292308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.226 [2024-10-07 13:36:34.292356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.226 [2024-10-07 13:36:34.292378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.226 [2024-10-07 13:36:34.292392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.226 [2024-10-07 13:36:34.292408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.226 [2024-10-07 13:36:34.292423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.226 [2024-10-07 13:36:34.292436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.226 [2024-10-07 13:36:34.292618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.226 [2024-10-07 13:36:34.292642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.226 [2024-10-07 13:36:34.308093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.226 [2024-10-07 13:36:34.308133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.226 [2024-10-07 13:36:34.308298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-10-07 13:36:34.308329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.226 [2024-10-07 13:36:34.308346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.226 [2024-10-07 13:36:34.308457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-10-07 13:36:34.308483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.226 [2024-10-07 13:36:34.308500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.226 [2024-10-07 13:36:34.308526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.226 [2024-10-07 13:36:34.308548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.226 [2024-10-07 13:36:34.308570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.226 [2024-10-07 13:36:34.308585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.226 [2024-10-07 13:36:34.308598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.226 [2024-10-07 13:36:34.308615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.226 [2024-10-07 13:36:34.308630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.226 [2024-10-07 13:36:34.308643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.226 [2024-10-07 13:36:34.308679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.226 [2024-10-07 13:36:34.308697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.226 [2024-10-07 13:36:34.321833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.226 [2024-10-07 13:36:34.321867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.226 [2024-10-07 13:36:34.322191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-10-07 13:36:34.322222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.226 [2024-10-07 13:36:34.322240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.226 [2024-10-07 13:36:34.322325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-10-07 13:36:34.322352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.226 [2024-10-07 13:36:34.322368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.226 [2024-10-07 13:36:34.322574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.226 [2024-10-07 13:36:34.322603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.226 [2024-10-07 13:36:34.322814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.226 [2024-10-07 13:36:34.322838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.226 [2024-10-07 13:36:34.322853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.226 [2024-10-07 13:36:34.322876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.226 [2024-10-07 13:36:34.322892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.226 [2024-10-07 13:36:34.322904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.226 [2024-10-07 13:36:34.323108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.226 [2024-10-07 13:36:34.323133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.226 [2024-10-07 13:36:34.337371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.226 [2024-10-07 13:36:34.337420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.226 [2024-10-07 13:36:34.337933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-10-07 13:36:34.337975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.226 [2024-10-07 13:36:34.337992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.226 [2024-10-07 13:36:34.338076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-10-07 13:36:34.338102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.226 [2024-10-07 13:36:34.338118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.226 [2024-10-07 13:36:34.338322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.226 [2024-10-07 13:36:34.338352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.226 [2024-10-07 13:36:34.338861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.226 [2024-10-07 13:36:34.338887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.226 [2024-10-07 13:36:34.338907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.226 [2024-10-07 13:36:34.338925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.226 [2024-10-07 13:36:34.338939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.226 [2024-10-07 13:36:34.338952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.226 [2024-10-07 13:36:34.339204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.226 [2024-10-07 13:36:34.339229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.226 [2024-10-07 13:36:34.348461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.226 [2024-10-07 13:36:34.348493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.226 [2024-10-07 13:36:34.348692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-10-07 13:36:34.348724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.226 [2024-10-07 13:36:34.348741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.226 [2024-10-07 13:36:34.348881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-10-07 13:36:34.348908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.226 [2024-10-07 13:36:34.348930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.226 [2024-10-07 13:36:34.349046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.226 [2024-10-07 13:36:34.349074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.226 [2024-10-07 13:36:34.352066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.226 [2024-10-07 13:36:34.352094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.226 [2024-10-07 13:36:34.352113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.226 [2024-10-07 13:36:34.352130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.226 [2024-10-07 13:36:34.352145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.226 [2024-10-07 13:36:34.352157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.226 [2024-10-07 13:36:34.353199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.226 [2024-10-07 13:36:34.353224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.227 [2024-10-07 13:36:34.358576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.227 [2024-10-07 13:36:34.358622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.227 [2024-10-07 13:36:34.358761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-10-07 13:36:34.358791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.227 [2024-10-07 13:36:34.358809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.227 [2024-10-07 13:36:34.358924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-10-07 13:36:34.358951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.227 [2024-10-07 13:36:34.358967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.227 [2024-10-07 13:36:34.358987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.227 [2024-10-07 13:36:34.359013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.227 [2024-10-07 13:36:34.359032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.227 [2024-10-07 13:36:34.359045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.227 [2024-10-07 13:36:34.359058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.227 [2024-10-07 13:36:34.359083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.227 [2024-10-07 13:36:34.359100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.227 [2024-10-07 13:36:34.359113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.227 [2024-10-07 13:36:34.359126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.227 [2024-10-07 13:36:34.359165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.227 [2024-10-07 13:36:34.368818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.227 [2024-10-07 13:36:34.368851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.227 [2024-10-07 13:36:34.369021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-10-07 13:36:34.369051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.227 [2024-10-07 13:36:34.369069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.227 [2024-10-07 13:36:34.369170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-10-07 13:36:34.369197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.227 [2024-10-07 13:36:34.369213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.227 [2024-10-07 13:36:34.369238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.227 [2024-10-07 13:36:34.369260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.227 [2024-10-07 13:36:34.369281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.227 [2024-10-07 13:36:34.369296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.227 [2024-10-07 13:36:34.369309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.227 [2024-10-07 13:36:34.369326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.227 [2024-10-07 13:36:34.369340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.227 [2024-10-07 13:36:34.369353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.227 [2024-10-07 13:36:34.369535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.227 [2024-10-07 13:36:34.369560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.227 [2024-10-07 13:36:34.382995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.227 [2024-10-07 13:36:34.383028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.227 [2024-10-07 13:36:34.383303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-10-07 13:36:34.383334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.227 [2024-10-07 13:36:34.383352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.227 [2024-10-07 13:36:34.383453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-10-07 13:36:34.383480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.227 [2024-10-07 13:36:34.383497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.227 [2024-10-07 13:36:34.383829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.227 [2024-10-07 13:36:34.383882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.227 [2024-10-07 13:36:34.384433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.227 [2024-10-07 13:36:34.384472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.227 [2024-10-07 13:36:34.384491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.227 [2024-10-07 13:36:34.384522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.227 [2024-10-07 13:36:34.384543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.227 [2024-10-07 13:36:34.384557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.227 [2024-10-07 13:36:34.384800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.227 [2024-10-07 13:36:34.384825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.227 [2024-10-07 13:36:34.396880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.227 [2024-10-07 13:36:34.396915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.227 [2024-10-07 13:36:34.397270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.227 [2024-10-07 13:36:34.397300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.227 [2024-10-07 13:36:34.397319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.227 [2024-10-07 13:36:34.397424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.227 [2024-10-07 13:36:34.397451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.227 [2024-10-07 13:36:34.397468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.227 [2024-10-07 13:36:34.397953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.227 [2024-10-07 13:36:34.397984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.227 [2024-10-07 13:36:34.398328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.227 [2024-10-07 13:36:34.398353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.227 [2024-10-07 13:36:34.398367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.227 [2024-10-07 13:36:34.398385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.227 [2024-10-07 13:36:34.398400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.227 [2024-10-07 13:36:34.398413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.227 [2024-10-07 13:36:34.398649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.227 [2024-10-07 13:36:34.398684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.227 [2024-10-07 13:36:34.407189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.227 [2024-10-07 13:36:34.407221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.227 [2024-10-07 13:36:34.407444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.227 [2024-10-07 13:36:34.407474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.227 [2024-10-07 13:36:34.407492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.227 [2024-10-07 13:36:34.407577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.227 [2024-10-07 13:36:34.407604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.227 [2024-10-07 13:36:34.407621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.227 [2024-10-07 13:36:34.407653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.228 [2024-10-07 13:36:34.407685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.228 [2024-10-07 13:36:34.407708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.228 [2024-10-07 13:36:34.407724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.228 [2024-10-07 13:36:34.407737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.228 [2024-10-07 13:36:34.407754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.228 [2024-10-07 13:36:34.407768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.228 [2024-10-07 13:36:34.407782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.228 [2024-10-07 13:36:34.407806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.228 [2024-10-07 13:36:34.407823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.228 [2024-10-07 13:36:34.417299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.228 [2024-10-07 13:36:34.417361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.228 [2024-10-07 13:36:34.417487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.228 [2024-10-07 13:36:34.417516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.228 [2024-10-07 13:36:34.417533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.228 [2024-10-07 13:36:34.417870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.228 [2024-10-07 13:36:34.417900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.228 [2024-10-07 13:36:34.417917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.228 [2024-10-07 13:36:34.417936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.228 [2024-10-07 13:36:34.418075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.228 [2024-10-07 13:36:34.418101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.228 [2024-10-07 13:36:34.418114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.228 [2024-10-07 13:36:34.418128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.228 [2024-10-07 13:36:34.418235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.228 [2024-10-07 13:36:34.418274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.228 [2024-10-07 13:36:34.418287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.228 [2024-10-07 13:36:34.418300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.228 [2024-10-07 13:36:34.418413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.228 [2024-10-07 13:36:34.428914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.228 [2024-10-07 13:36:34.428948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.228 [2024-10-07 13:36:34.429118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.228 [2024-10-07 13:36:34.429154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.228 [2024-10-07 13:36:34.429172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.228 [2024-10-07 13:36:34.429285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.228 [2024-10-07 13:36:34.429311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.228 [2024-10-07 13:36:34.429327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.228 [2024-10-07 13:36:34.429510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.228 [2024-10-07 13:36:34.429539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.228 [2024-10-07 13:36:34.429602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.228 [2024-10-07 13:36:34.429622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.228 [2024-10-07 13:36:34.429662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.228 [2024-10-07 13:36:34.429692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.228 [2024-10-07 13:36:34.429707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.228 [2024-10-07 13:36:34.429719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.228 [2024-10-07 13:36:34.430199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.228 [2024-10-07 13:36:34.430223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.228 [2024-10-07 13:36:34.442370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.228 [2024-10-07 13:36:34.442404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.228 [2024-10-07 13:36:34.442830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.228 [2024-10-07 13:36:34.442862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.228 [2024-10-07 13:36:34.442879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.228 [2024-10-07 13:36:34.442973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.228 [2024-10-07 13:36:34.442999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.228 [2024-10-07 13:36:34.443015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.228 [2024-10-07 13:36:34.443249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.228 [2024-10-07 13:36:34.443280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.228 [2024-10-07 13:36:34.443330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.228 [2024-10-07 13:36:34.443351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.228 [2024-10-07 13:36:34.443365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.228 [2024-10-07 13:36:34.443384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.228 [2024-10-07 13:36:34.443398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.228 [2024-10-07 13:36:34.443416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.228 [2024-10-07 13:36:34.443903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.228 [2024-10-07 13:36:34.443929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.228 [2024-10-07 13:36:34.452776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.228 [2024-10-07 13:36:34.452808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.228 [2024-10-07 13:36:34.453003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.228 [2024-10-07 13:36:34.453033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.228 [2024-10-07 13:36:34.453051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.228 [2024-10-07 13:36:34.453160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.228 [2024-10-07 13:36:34.453187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.228 [2024-10-07 13:36:34.453204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.228 [2024-10-07 13:36:34.456021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.228 [2024-10-07 13:36:34.456053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.228 [2024-10-07 13:36:34.456952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.228 [2024-10-07 13:36:34.456994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.228 [2024-10-07 13:36:34.457008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.228 [2024-10-07 13:36:34.457025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.228 [2024-10-07 13:36:34.457038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.229 [2024-10-07 13:36:34.457051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.229 [2024-10-07 13:36:34.457805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.229 [2024-10-07 13:36:34.457832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.229 [2024-10-07 13:36:34.463048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.229 [2024-10-07 13:36:34.463079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.229 [2024-10-07 13:36:34.463263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.229 [2024-10-07 13:36:34.463293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.229 [2024-10-07 13:36:34.463310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.229 [2024-10-07 13:36:34.463444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.229 [2024-10-07 13:36:34.463482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.229 [2024-10-07 13:36:34.463499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.229 [2024-10-07 13:36:34.463523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.229 [2024-10-07 13:36:34.463551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.229 [2024-10-07 13:36:34.463573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.229 [2024-10-07 13:36:34.463588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.229 [2024-10-07 13:36:34.463602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.229 [2024-10-07 13:36:34.463619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.229 [2024-10-07 13:36:34.463633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.229 [2024-10-07 13:36:34.463656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.229 [2024-10-07 13:36:34.463706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.229 [2024-10-07 13:36:34.463723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.229 [2024-10-07 13:36:34.473198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.229 [2024-10-07 13:36:34.473247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.229 [2024-10-07 13:36:34.473380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.229 [2024-10-07 13:36:34.473410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.229 [2024-10-07 13:36:34.473427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.229 [2024-10-07 13:36:34.473570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.229 [2024-10-07 13:36:34.473597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.229 [2024-10-07 13:36:34.473613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.229 [2024-10-07 13:36:34.473631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.229 [2024-10-07 13:36:34.473890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.229 [2024-10-07 13:36:34.473933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.229 [2024-10-07 13:36:34.473947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.229 [2024-10-07 13:36:34.473960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.229 [2024-10-07 13:36:34.474025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.229 [2024-10-07 13:36:34.474061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.229 [2024-10-07 13:36:34.474074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.229 [2024-10-07 13:36:34.474087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.229 [2024-10-07 13:36:34.474112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.229 [2024-10-07 13:36:34.485830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.229 [2024-10-07 13:36:34.485864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.229 [2024-10-07 13:36:34.486058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.229 [2024-10-07 13:36:34.486088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.229 [2024-10-07 13:36:34.486111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.229 [2024-10-07 13:36:34.486220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.230 [2024-10-07 13:36:34.486247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.230 [2024-10-07 13:36:34.486263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.230 [2024-10-07 13:36:34.486517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.230 [2024-10-07 13:36:34.486547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.230 [2024-10-07 13:36:34.486604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.230 [2024-10-07 13:36:34.486627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.230 [2024-10-07 13:36:34.486641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.230 [2024-10-07 13:36:34.486659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.230 [2024-10-07 13:36:34.486685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.230 [2024-10-07 13:36:34.486699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.230 [2024-10-07 13:36:34.486742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.230 [2024-10-07 13:36:34.486762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.230 [2024-10-07 13:36:34.496284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.230 [2024-10-07 13:36:34.496316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.230 [2024-10-07 13:36:34.496453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.230 [2024-10-07 13:36:34.496482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.230 [2024-10-07 13:36:34.496499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.230 [2024-10-07 13:36:34.496604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.230 [2024-10-07 13:36:34.496631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.230 [2024-10-07 13:36:34.496647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.230 [2024-10-07 13:36:34.499292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.230 [2024-10-07 13:36:34.499324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.230 [2024-10-07 13:36:34.500199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.230 [2024-10-07 13:36:34.500224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.230 [2024-10-07 13:36:34.500246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.230 [2024-10-07 13:36:34.500263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.230 [2024-10-07 13:36:34.500277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.230 [2024-10-07 13:36:34.500296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.230 [2024-10-07 13:36:34.500833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.230 [2024-10-07 13:36:34.500858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.230 [2024-10-07 13:36:34.506396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.230 [2024-10-07 13:36:34.506442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.230 [2024-10-07 13:36:34.506573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.230 [2024-10-07 13:36:34.506603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.230 [2024-10-07 13:36:34.506620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.230 [2024-10-07 13:36:34.506731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.230 [2024-10-07 13:36:34.506759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.230 [2024-10-07 13:36:34.506775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.230 [2024-10-07 13:36:34.506794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.230 [2024-10-07 13:36:34.506820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.230 [2024-10-07 13:36:34.506838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.230 [2024-10-07 13:36:34.506852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.230 [2024-10-07 13:36:34.506864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.230 [2024-10-07 13:36:34.506889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.230 [2024-10-07 13:36:34.506907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.230 [2024-10-07 13:36:34.506919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.230 [2024-10-07 13:36:34.506933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.230 [2024-10-07 13:36:34.506955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.230 [2024-10-07 13:36:34.516568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.230 [2024-10-07 13:36:34.516617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.230 [2024-10-07 13:36:34.516760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.230 [2024-10-07 13:36:34.516790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.230 [2024-10-07 13:36:34.516807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.230 [2024-10-07 13:36:34.516924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.230 [2024-10-07 13:36:34.516952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.230 [2024-10-07 13:36:34.516968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.230 [2024-10-07 13:36:34.516987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.230 [2024-10-07 13:36:34.517013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.230 [2024-10-07 13:36:34.517038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.231 [2024-10-07 13:36:34.517052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.231 [2024-10-07 13:36:34.517064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.231 [2024-10-07 13:36:34.517311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.231 [2024-10-07 13:36:34.517336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.232 [2024-10-07 13:36:34.517351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.232 [2024-10-07 13:36:34.517380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.232 [2024-10-07 13:36:34.517445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.232 [2024-10-07 13:36:34.529235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.232 [2024-10-07 13:36:34.529283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.232 [2024-10-07 13:36:34.529785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.232 [2024-10-07 13:36:34.529818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.232 [2024-10-07 13:36:34.529835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.232 [2024-10-07 13:36:34.529916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.232 [2024-10-07 13:36:34.529941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.232 [2024-10-07 13:36:34.529957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.232 [2024-10-07 13:36:34.530174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.232 [2024-10-07 13:36:34.530204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.232 [2024-10-07 13:36:34.530777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.232 [2024-10-07 13:36:34.530802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.232 [2024-10-07 13:36:34.530822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.232 [2024-10-07 13:36:34.530839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.232 [2024-10-07 13:36:34.530854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.232 [2024-10-07 13:36:34.530866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.232 [2024-10-07 13:36:34.531091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.232 [2024-10-07 13:36:34.531116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.232 [2024-10-07 13:36:34.539954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.232 [2024-10-07 13:36:34.540001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.232 [2024-10-07 13:36:34.540155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.232 [2024-10-07 13:36:34.540185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.232 [2024-10-07 13:36:34.540211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.232 [2024-10-07 13:36:34.540286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.232 [2024-10-07 13:36:34.540313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.232 [2024-10-07 13:36:34.540329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.232 [2024-10-07 13:36:34.543189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.232 [2024-10-07 13:36:34.543221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.232 [2024-10-07 13:36:34.543885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.232 [2024-10-07 13:36:34.543909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.232 [2024-10-07 13:36:34.543930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.232 [2024-10-07 13:36:34.543963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.232 [2024-10-07 13:36:34.543978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.232 [2024-10-07 13:36:34.543991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.232 [2024-10-07 13:36:34.544738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.232 [2024-10-07 13:36:34.544779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.232 [2024-10-07 13:36:34.550252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.232 [2024-10-07 13:36:34.550283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.232 [2024-10-07 13:36:34.550468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.232 [2024-10-07 13:36:34.550497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.232 [2024-10-07 13:36:34.550514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.232 [2024-10-07 13:36:34.550650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.232 [2024-10-07 13:36:34.550685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.232 [2024-10-07 13:36:34.550703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.232 [2024-10-07 13:36:34.550729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.232 [2024-10-07 13:36:34.550750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.232 [2024-10-07 13:36:34.550771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.232 [2024-10-07 13:36:34.550786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.232 [2024-10-07 13:36:34.550800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.232 [2024-10-07 13:36:34.550816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.232 [2024-10-07 13:36:34.550831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.232 [2024-10-07 13:36:34.550843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.232 [2024-10-07 13:36:34.550874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.232 [2024-10-07 13:36:34.550891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.232 [2024-10-07 13:36:34.560451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.232 [2024-10-07 13:36:34.560484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.232 [2024-10-07 13:36:34.560603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.232 [2024-10-07 13:36:34.560633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.232 [2024-10-07 13:36:34.560651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.232 [2024-10-07 13:36:34.560770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.232 [2024-10-07 13:36:34.560797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.232 [2024-10-07 13:36:34.560814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.232 [2024-10-07 13:36:34.560840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.232 [2024-10-07 13:36:34.560862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.232 [2024-10-07 13:36:34.560883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.232 [2024-10-07 13:36:34.560897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.232 [2024-10-07 13:36:34.560911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.232 [2024-10-07 13:36:34.560928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.232 [2024-10-07 13:36:34.560942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.232 [2024-10-07 13:36:34.560955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.232 [2024-10-07 13:36:34.560980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.232 [2024-10-07 13:36:34.561012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.233 [2024-10-07 13:36:34.573363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.233 [2024-10-07 13:36:34.573396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.233 [2024-10-07 13:36:34.573595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.233 [2024-10-07 13:36:34.573626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.233 [2024-10-07 13:36:34.573644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.233 [2024-10-07 13:36:34.573734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.233 [2024-10-07 13:36:34.573762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.233 [2024-10-07 13:36:34.573779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.233 [2024-10-07 13:36:34.574290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.233 [2024-10-07 13:36:34.574320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.233 [2024-10-07 13:36:34.574571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.233 [2024-10-07 13:36:34.574601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.233 [2024-10-07 13:36:34.574616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.233 [2024-10-07 13:36:34.574635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.233 [2024-10-07 13:36:34.574650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.233 [2024-10-07 13:36:34.574663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.233 [2024-10-07 13:36:34.574892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.233 [2024-10-07 13:36:34.574917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.233 [2024-10-07 13:36:34.583748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.233 [2024-10-07 13:36:34.583781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.233 [2024-10-07 13:36:34.584057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.233 [2024-10-07 13:36:34.584088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.233 [2024-10-07 13:36:34.584105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.233 [2024-10-07 13:36:34.584196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.233 [2024-10-07 13:36:34.584223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.233 [2024-10-07 13:36:34.584239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.233 [2024-10-07 13:36:34.584346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.233 [2024-10-07 13:36:34.584374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.233 [2024-10-07 13:36:34.587260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.233 [2024-10-07 13:36:34.587287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.233 [2024-10-07 13:36:34.587308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.233 [2024-10-07 13:36:34.587325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.233 [2024-10-07 13:36:34.587340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.233 [2024-10-07 13:36:34.587352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.233 [2024-10-07 13:36:34.588230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.233 [2024-10-07 13:36:34.588270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.233 [2024-10-07 13:36:34.594134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.233 [2024-10-07 13:36:34.594166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.233 [2024-10-07 13:36:34.594359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.233 [2024-10-07 13:36:34.594390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.233 [2024-10-07 13:36:34.594407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.233 [2024-10-07 13:36:34.594553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.233 [2024-10-07 13:36:34.594580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.233 [2024-10-07 13:36:34.594597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.233 [2024-10-07 13:36:34.594623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.233 [2024-10-07 13:36:34.594645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.233 [2024-10-07 13:36:34.594673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.233 [2024-10-07 13:36:34.594689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.233 [2024-10-07 13:36:34.594703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.233 [2024-10-07 13:36:34.594719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.233 [2024-10-07 13:36:34.594734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.233 [2024-10-07 13:36:34.594747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.233 [2024-10-07 13:36:34.594771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.233 [2024-10-07 13:36:34.594788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.233 [2024-10-07 13:36:34.604247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.233 [2024-10-07 13:36:34.604302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.233 [2024-10-07 13:36:34.604433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.233 [2024-10-07 13:36:34.604463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.233 [2024-10-07 13:36:34.604481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.233 [2024-10-07 13:36:34.604752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.233 [2024-10-07 13:36:34.604805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.233 [2024-10-07 13:36:34.604822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.233 [2024-10-07 13:36:34.604841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.233 [2024-10-07 13:36:34.604893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.233 [2024-10-07 13:36:34.604916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.233 [2024-10-07 13:36:34.604929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.233 [2024-10-07 13:36:34.604942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.233 [2024-10-07 13:36:34.605124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.234 [2024-10-07 13:36:34.605150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.234 [2024-10-07 13:36:34.605182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.234 [2024-10-07 13:36:34.605195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.234 [2024-10-07 13:36:34.605264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.234 [2024-10-07 13:36:34.617263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.234 [2024-10-07 13:36:34.617298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.234 [2024-10-07 13:36:34.617682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.234 [2024-10-07 13:36:34.617713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.234 [2024-10-07 13:36:34.617730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.234 [2024-10-07 13:36:34.617839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.234 [2024-10-07 13:36:34.617864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.234 [2024-10-07 13:36:34.617880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.234 [2024-10-07 13:36:34.618436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.234 [2024-10-07 13:36:34.618465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.234 [2024-10-07 13:36:34.618725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.234 [2024-10-07 13:36:34.618749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.235 [2024-10-07 13:36:34.618764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.235 [2024-10-07 13:36:34.618781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.235 [2024-10-07 13:36:34.618796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.235 [2024-10-07 13:36:34.618809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.235 [2024-10-07 13:36:34.619012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.235 [2024-10-07 13:36:34.619037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.235 [2024-10-07 13:36:34.628705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.235 [2024-10-07 13:36:34.628739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.235 [2024-10-07 13:36:34.628965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.235 [2024-10-07 13:36:34.628995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.235 [2024-10-07 13:36:34.629012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.235 [2024-10-07 13:36:34.629122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.235 [2024-10-07 13:36:34.629149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.235 [2024-10-07 13:36:34.629165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.235 [2024-10-07 13:36:34.629287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.235 [2024-10-07 13:36:34.629315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.235 [2024-10-07 13:36:34.629432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.235 [2024-10-07 13:36:34.629473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.235 [2024-10-07 13:36:34.629487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.235 [2024-10-07 13:36:34.629504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.235 [2024-10-07 13:36:34.629518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.235 [2024-10-07 13:36:34.629530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.235 [2024-10-07 13:36:34.629660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.235 [2024-10-07 13:36:34.629693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.235 8458.77 IOPS, 33.04 MiB/s [2024-10-07T11:36:37.947Z] [2024-10-07 13:36:34.638944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.235 [2024-10-07 13:36:34.638976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.235 [2024-10-07 13:36:34.639168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.235 [2024-10-07 13:36:34.639198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.235 [2024-10-07 13:36:34.639216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.235 [2024-10-07 13:36:34.639324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.235 [2024-10-07 13:36:34.639351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.235 [2024-10-07 13:36:34.639367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.235 [2024-10-07 13:36:34.641612] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.235 [2024-10-07 13:36:34.641644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.235 [2024-10-07 13:36:34.641778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.235 [2024-10-07 13:36:34.641802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.235 [2024-10-07 13:36:34.641815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.235 [2024-10-07 13:36:34.641833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.235 [2024-10-07 13:36:34.641847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.235 [2024-10-07 13:36:34.641860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.235 [2024-10-07 13:36:34.641899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.235 [2024-10-07 13:36:34.641918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.235 [2024-10-07 13:36:34.649109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.235 [2024-10-07 13:36:34.649141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.235 [2024-10-07 13:36:34.649280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.235 [2024-10-07 13:36:34.649310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.235 [2024-10-07 13:36:34.649327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.235 [2024-10-07 13:36:34.649442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.235 [2024-10-07 13:36:34.649473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.235 [2024-10-07 13:36:34.649490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.235 [2024-10-07 13:36:34.649683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.235 [2024-10-07 13:36:34.649712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.235 [2024-10-07 13:36:34.649777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.235 [2024-10-07 13:36:34.649813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.235 [2024-10-07 13:36:34.649828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.235 [2024-10-07 13:36:34.649845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.235 [2024-10-07 13:36:34.649860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.235 [2024-10-07 13:36:34.649872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.235 [2024-10-07 13:36:34.650055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.235 [2024-10-07 13:36:34.650080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.235 [2024-10-07 13:36:34.661663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.235 [2024-10-07 13:36:34.661705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.235 [2024-10-07 13:36:34.662052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.235 [2024-10-07 13:36:34.662084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.235 [2024-10-07 13:36:34.662101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.235 [2024-10-07 13:36:34.662237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.235 [2024-10-07 13:36:34.662264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.235 [2024-10-07 13:36:34.662280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.235 [2024-10-07 13:36:34.662797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.235 [2024-10-07 13:36:34.662829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.235 [2024-10-07 13:36:34.663150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.235 [2024-10-07 13:36:34.663174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.235 [2024-10-07 13:36:34.663188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.235 [2024-10-07 13:36:34.663206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.235 [2024-10-07 13:36:34.663220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.235 [2024-10-07 13:36:34.663247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.235 [2024-10-07 13:36:34.663476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.236 [2024-10-07 13:36:34.663501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.236 [2024-10-07 13:36:34.672455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.236 [2024-10-07 13:36:34.672488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.236 [2024-10-07 13:36:34.672690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.236 [2024-10-07 13:36:34.672721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.236 [2024-10-07 13:36:34.672739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.236 [2024-10-07 13:36:34.672828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.236 [2024-10-07 13:36:34.672855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.236 [2024-10-07 13:36:34.672871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.236 [2024-10-07 13:36:34.672978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.236 [2024-10-07 13:36:34.673005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.236 [2024-10-07 13:36:34.673136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.236 [2024-10-07 13:36:34.673157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.236 [2024-10-07 13:36:34.673170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.236 [2024-10-07 13:36:34.673186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.236 [2024-10-07 13:36:34.673200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.236 [2024-10-07 13:36:34.673212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.236 [2024-10-07 13:36:34.674249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.236 [2024-10-07 13:36:34.674274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.236 [2024-10-07 13:36:34.683689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.236 [2024-10-07 13:36:34.683721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.237 [2024-10-07 13:36:34.683887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.237 [2024-10-07 13:36:34.683917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.237 [2024-10-07 13:36:34.683934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.237 [2024-10-07 13:36:34.684016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.237 [2024-10-07 13:36:34.684043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.237 [2024-10-07 13:36:34.684060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.237 [2024-10-07 13:36:34.686031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.237 [2024-10-07 13:36:34.686061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.237 [2024-10-07 13:36:34.686154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.237 [2024-10-07 13:36:34.686176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.237 [2024-10-07 13:36:34.686196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.237 [2024-10-07 13:36:34.686215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.237 [2024-10-07 13:36:34.686229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.237 [2024-10-07 13:36:34.686242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.237 [2024-10-07 13:36:34.686268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.237 [2024-10-07 13:36:34.686285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.237 [2024-10-07 13:36:34.693800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.237 [2024-10-07 13:36:34.693850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.237 [2024-10-07 13:36:34.694012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.237 [2024-10-07 13:36:34.694041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.237 [2024-10-07 13:36:34.694058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.237 [2024-10-07 13:36:34.694173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.237 [2024-10-07 13:36:34.694201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.237 [2024-10-07 13:36:34.694217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.237 [2024-10-07 13:36:34.694236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.237 [2024-10-07 13:36:34.694262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.237 [2024-10-07 13:36:34.694280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.237 [2024-10-07 13:36:34.694294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.237 [2024-10-07 13:36:34.694307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.237 [2024-10-07 13:36:34.694331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.237 [2024-10-07 13:36:34.694349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.237 [2024-10-07 13:36:34.694362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.237 [2024-10-07 13:36:34.694374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.237 [2024-10-07 13:36:34.694396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.237 [2024-10-07 13:36:34.707258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.237 [2024-10-07 13:36:34.707291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.237 [2024-10-07 13:36:34.707553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.238 [2024-10-07 13:36:34.707584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.238 [2024-10-07 13:36:34.707601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.238 [2024-10-07 13:36:34.707711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.238 [2024-10-07 13:36:34.707739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.238 [2024-10-07 13:36:34.707762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.238 [2024-10-07 13:36:34.708040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.238 [2024-10-07 13:36:34.708069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.238 [2024-10-07 13:36:34.708237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.238 [2024-10-07 13:36:34.708263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.238 [2024-10-07 13:36:34.708277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.238 [2024-10-07 13:36:34.708295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.238 [2024-10-07 13:36:34.708309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.238 [2024-10-07 13:36:34.708322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.238 [2024-10-07 13:36:34.708361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.238 [2024-10-07 13:36:34.708380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.238 [2024-10-07 13:36:34.721460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.238 [2024-10-07 13:36:34.721494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.238 [2024-10-07 13:36:34.721891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.238 [2024-10-07 13:36:34.721928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.238 [2024-10-07 13:36:34.721946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.238 [2024-10-07 13:36:34.722064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.238 [2024-10-07 13:36:34.722091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.238 [2024-10-07 13:36:34.722108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.238 [2024-10-07 13:36:34.722451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.238 [2024-10-07 13:36:34.722507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.238 [2024-10-07 13:36:34.722753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.238 [2024-10-07 13:36:34.722778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.238 [2024-10-07 13:36:34.722793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.238 [2024-10-07 13:36:34.722810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.238 [2024-10-07 13:36:34.722825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.238 [2024-10-07 13:36:34.722838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.238 [2024-10-07 13:36:34.722903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.238 [2024-10-07 13:36:34.722924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.238 [2024-10-07 13:36:34.736477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.239 [2024-10-07 13:36:34.736516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.239 [2024-10-07 13:36:34.736889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.239 [2024-10-07 13:36:34.736930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.239 [2024-10-07 13:36:34.736947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.239 [2024-10-07 13:36:34.737031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.239 [2024-10-07 13:36:34.737056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.239 [2024-10-07 13:36:34.737072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.239 [2024-10-07 13:36:34.737276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.239 [2024-10-07 13:36:34.737306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.239 [2024-10-07 13:36:34.737354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.239 [2024-10-07 13:36:34.737375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.239 [2024-10-07 13:36:34.737389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.239 [2024-10-07 13:36:34.737407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.239 [2024-10-07 13:36:34.737421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.239 [2024-10-07 13:36:34.737434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.239 [2024-10-07 13:36:34.737616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.239 [2024-10-07 13:36:34.737640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.239 [2024-10-07 13:36:34.747012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.239 [2024-10-07 13:36:34.747045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.239 [2024-10-07 13:36:34.747248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.239 [2024-10-07 13:36:34.747279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.239 [2024-10-07 13:36:34.747297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.239 [2024-10-07 13:36:34.747376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.239 [2024-10-07 13:36:34.747403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.239 [2024-10-07 13:36:34.747419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.239 [2024-10-07 13:36:34.747527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.239 [2024-10-07 13:36:34.747555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.239 [2024-10-07 13:36:34.747683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.239 [2024-10-07 13:36:34.747707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.239 [2024-10-07 13:36:34.747722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.239 [2024-10-07 13:36:34.747745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.239 [2024-10-07 13:36:34.747761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.239 [2024-10-07 13:36:34.747773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.239 [2024-10-07 13:36:34.750452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.239 [2024-10-07 13:36:34.750479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.239 [2024-10-07 13:36:34.757124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.239 [2024-10-07 13:36:34.757169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.239 [2024-10-07 13:36:34.757362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.239 [2024-10-07 13:36:34.757391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.239 [2024-10-07 13:36:34.757408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.239 [2024-10-07 13:36:34.757522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.239 [2024-10-07 13:36:34.757549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.239 [2024-10-07 13:36:34.757565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.239 [2024-10-07 13:36:34.757583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.239 [2024-10-07 13:36:34.757609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.239 [2024-10-07 13:36:34.757628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.239 [2024-10-07 13:36:34.757641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.239 [2024-10-07 13:36:34.757654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.239 [2024-10-07 13:36:34.757687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.239 [2024-10-07 13:36:34.757707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.239 [2024-10-07 13:36:34.757720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.239 [2024-10-07 13:36:34.757733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.239 [2024-10-07 13:36:34.757756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.239 [2024-10-07 13:36:34.767513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.239 [2024-10-07 13:36:34.767546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.239 [2024-10-07 13:36:34.767703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.239 [2024-10-07 13:36:34.767734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.239 [2024-10-07 13:36:34.767751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.239 [2024-10-07 13:36:34.767832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.239 [2024-10-07 13:36:34.767859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.239 [2024-10-07 13:36:34.767876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.239 [2024-10-07 13:36:34.768067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.239 [2024-10-07 13:36:34.768111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.239 [2024-10-07 13:36:34.768173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.239 [2024-10-07 13:36:34.768194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.239 [2024-10-07 13:36:34.768208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.239 [2024-10-07 13:36:34.768241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.239 [2024-10-07 13:36:34.768256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.239 [2024-10-07 13:36:34.768269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.239 [2024-10-07 13:36:34.768748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.239 [2024-10-07 13:36:34.768774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.239 [2024-10-07 13:36:34.781100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.239 [2024-10-07 13:36:34.781134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.239 [2024-10-07 13:36:34.781436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.239 [2024-10-07 13:36:34.781467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.239 [2024-10-07 13:36:34.781485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.239 [2024-10-07 13:36:34.781566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.239 [2024-10-07 13:36:34.781594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.239 [2024-10-07 13:36:34.781611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.240 [2024-10-07 13:36:34.782126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.240 [2024-10-07 13:36:34.782156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.240 [2024-10-07 13:36:34.782403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.240 [2024-10-07 13:36:34.782428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.240 [2024-10-07 13:36:34.782442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.240 [2024-10-07 13:36:34.782459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.240 [2024-10-07 13:36:34.782473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.240 [2024-10-07 13:36:34.782486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.240 [2024-10-07 13:36:34.782707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.240 [2024-10-07 13:36:34.782732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.240 [2024-10-07 13:36:34.791415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.240 [2024-10-07 13:36:34.791448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.240 [2024-10-07 13:36:34.792543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.240 [2024-10-07 13:36:34.792574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.240 [2024-10-07 13:36:34.792592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.240 [2024-10-07 13:36:34.792737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.240 [2024-10-07 13:36:34.792765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.240 [2024-10-07 13:36:34.792781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.240 [2024-10-07 13:36:34.794520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.240 [2024-10-07 13:36:34.794551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.240 [2024-10-07 13:36:34.795140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.240 [2024-10-07 13:36:34.795165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.240 [2024-10-07 13:36:34.795186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.240 [2024-10-07 13:36:34.795203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.240 [2024-10-07 13:36:34.795217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.240 [2024-10-07 13:36:34.795229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.240 [2024-10-07 13:36:34.795741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.240 [2024-10-07 13:36:34.795766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.240 [2024-10-07 13:36:34.801536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.240 [2024-10-07 13:36:34.801581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.240 [2024-10-07 13:36:34.801728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.240 [2024-10-07 13:36:34.801758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.240 [2024-10-07 13:36:34.801775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.240 [2024-10-07 13:36:34.801921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.240 [2024-10-07 13:36:34.801948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.240 [2024-10-07 13:36:34.801964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.240 [2024-10-07 13:36:34.801983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.240 [2024-10-07 13:36:34.802009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.240 [2024-10-07 13:36:34.802028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.240 [2024-10-07 13:36:34.802041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.240 [2024-10-07 13:36:34.802055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.240 [2024-10-07 13:36:34.802080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.240 [2024-10-07 13:36:34.802102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.240 [2024-10-07 13:36:34.802117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.240 [2024-10-07 13:36:34.802130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.240 [2024-10-07 13:36:34.802169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.240 [2024-10-07 13:36:34.811637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.240 [2024-10-07 13:36:34.811815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.240 [2024-10-07 13:36:34.811862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.240 [2024-10-07 13:36:34.811880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.240 [2024-10-07 13:36:34.811906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.240 [2024-10-07 13:36:34.811938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.240 [2024-10-07 13:36:34.812049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.240 [2024-10-07 13:36:34.812077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.240 [2024-10-07 13:36:34.812094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.240 [2024-10-07 13:36:34.812110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.240 [2024-10-07 13:36:34.812123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.240 [2024-10-07 13:36:34.812137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.240 [2024-10-07 13:36:34.812162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.240 [2024-10-07 13:36:34.812183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.240 [2024-10-07 13:36:34.812205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.240 [2024-10-07 13:36:34.812220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.240 [2024-10-07 13:36:34.812233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.240 [2024-10-07 13:36:34.812257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.240 [2024-10-07 13:36:34.824059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.240 [2024-10-07 13:36:34.824093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.240 [2024-10-07 13:36:34.824726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.240 [2024-10-07 13:36:34.824758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.241 [2024-10-07 13:36:34.824775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.241 [2024-10-07 13:36:34.824857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.241 [2024-10-07 13:36:34.824883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.241 [2024-10-07 13:36:34.824899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.241 [2024-10-07 13:36:34.825269] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.241 [2024-10-07 13:36:34.825318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.241 [2024-10-07 13:36:34.825397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.241 [2024-10-07 13:36:34.825417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.241 [2024-10-07 13:36:34.825445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.241 [2024-10-07 13:36:34.825463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.241 [2024-10-07 13:36:34.825477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.241 [2024-10-07 13:36:34.825491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.241 [2024-10-07 13:36:34.825517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.241 [2024-10-07 13:36:34.825533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.241 [2024-10-07 13:36:34.838681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.241 [2024-10-07 13:36:34.838715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.241 [2024-10-07 13:36:34.839307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.241 [2024-10-07 13:36:34.839338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.241 [2024-10-07 13:36:34.839356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.241 [2024-10-07 13:36:34.839438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.241 [2024-10-07 13:36:34.839463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.241 [2024-10-07 13:36:34.839479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.241 [2024-10-07 13:36:34.840308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.241 [2024-10-07 13:36:34.840339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.241 [2024-10-07 13:36:34.840753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.241 [2024-10-07 13:36:34.840777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.241 [2024-10-07 13:36:34.840791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.241 [2024-10-07 13:36:34.840815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.241 [2024-10-07 13:36:34.840829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.241 [2024-10-07 13:36:34.840842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.241 [2024-10-07 13:36:34.841085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.241 [2024-10-07 13:36:34.841111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.241 [2024-10-07 13:36:34.855213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.241 [2024-10-07 13:36:34.855246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.241 [2024-10-07 13:36:34.855566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.241 [2024-10-07 13:36:34.855602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.241 [2024-10-07 13:36:34.855621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.241 [2024-10-07 13:36:34.855728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.241 [2024-10-07 13:36:34.855756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.241 [2024-10-07 13:36:34.855772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.241 [2024-10-07 13:36:34.856091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.241 [2024-10-07 13:36:34.856135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.241 [2024-10-07 13:36:34.856688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.241 [2024-10-07 13:36:34.856714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.241 [2024-10-07 13:36:34.856745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.241 [2024-10-07 13:36:34.856762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.241 [2024-10-07 13:36:34.856776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.241 [2024-10-07 13:36:34.856788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.241 [2024-10-07 13:36:34.857025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.241 [2024-10-07 13:36:34.857050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.241 [2024-10-07 13:36:34.871613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.241 [2024-10-07 13:36:34.871662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.241 [2024-10-07 13:36:34.872070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.241 [2024-10-07 13:36:34.872101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.241 [2024-10-07 13:36:34.872120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.241 [2024-10-07 13:36:34.872258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.241 [2024-10-07 13:36:34.872285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.241 [2024-10-07 13:36:34.872302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.241 [2024-10-07 13:36:34.872505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.241 [2024-10-07 13:36:34.872535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.241 [2024-10-07 13:36:34.872746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.241 [2024-10-07 13:36:34.872770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.241 [2024-10-07 13:36:34.872784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.241 [2024-10-07 13:36:34.872802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.241 [2024-10-07 13:36:34.872816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.241 [2024-10-07 13:36:34.872834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.241 [2024-10-07 13:36:34.873039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.241 [2024-10-07 13:36:34.873063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.241 [2024-10-07 13:36:34.886811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.241 [2024-10-07 13:36:34.886845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.241 [2024-10-07 13:36:34.887182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.241 [2024-10-07 13:36:34.887214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.241 [2024-10-07 13:36:34.887232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.241 [2024-10-07 13:36:34.887317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.242 [2024-10-07 13:36:34.887343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.242 [2024-10-07 13:36:34.887359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.242 [2024-10-07 13:36:34.887564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.242 [2024-10-07 13:36:34.887593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.242 [2024-10-07 13:36:34.887641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.242 [2024-10-07 13:36:34.887661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.242 [2024-10-07 13:36:34.887685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.242 [2024-10-07 13:36:34.887712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.242 [2024-10-07 13:36:34.887727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.242 [2024-10-07 13:36:34.887739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.242 [2024-10-07 13:36:34.887921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.242 [2024-10-07 13:36:34.887970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.242 [2024-10-07 13:36:34.901984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.242 [2024-10-07 13:36:34.902017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.242 [2024-10-07 13:36:34.902154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.242 [2024-10-07 13:36:34.902183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.242 [2024-10-07 13:36:34.902200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.242 [2024-10-07 13:36:34.902281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.242 [2024-10-07 13:36:34.902307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.242 [2024-10-07 13:36:34.902323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.242 [2024-10-07 13:36:34.902349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.242 [2024-10-07 13:36:34.902377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.242 [2024-10-07 13:36:34.902399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.242 [2024-10-07 13:36:34.902414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.242 [2024-10-07 13:36:34.902427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.242 [2024-10-07 13:36:34.902444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.242 [2024-10-07 13:36:34.902458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.242 [2024-10-07 13:36:34.902471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.242 [2024-10-07 13:36:34.902496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.242 [2024-10-07 13:36:34.902525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.242 [2024-10-07 13:36:34.912098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.242 [2024-10-07 13:36:34.912145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.242 [2024-10-07 13:36:34.912261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.242 [2024-10-07 13:36:34.912289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.242 [2024-10-07 13:36:34.912305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.242 [2024-10-07 13:36:34.912466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.242 [2024-10-07 13:36:34.912493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.242 [2024-10-07 13:36:34.912509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.242 [2024-10-07 13:36:34.912529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.242 [2024-10-07 13:36:34.912555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.242 [2024-10-07 13:36:34.912573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.242 [2024-10-07 13:36:34.912587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.242 [2024-10-07 13:36:34.912600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.242 [2024-10-07 13:36:34.915183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.242 [2024-10-07 13:36:34.915212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.242 [2024-10-07 13:36:34.915227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.242 [2024-10-07 13:36:34.915240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.242 [2024-10-07 13:36:34.919123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.242 [2024-10-07 13:36:34.922182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.242 [2024-10-07 13:36:34.922301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.242 [2024-10-07 13:36:34.922331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.242 [2024-10-07 13:36:34.922348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.242 [2024-10-07 13:36:34.922393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.242 [2024-10-07 13:36:34.922425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.242 [2024-10-07 13:36:34.922455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.242 [2024-10-07 13:36:34.922472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.242 [2024-10-07 13:36:34.922485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.242 [2024-10-07 13:36:34.922508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.242 [2024-10-07 13:36:34.922695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.242 [2024-10-07 13:36:34.922722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.242 [2024-10-07 13:36:34.922738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.242 [2024-10-07 13:36:34.922764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.242 [2024-10-07 13:36:34.922788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.242 [2024-10-07 13:36:34.922803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.242 [2024-10-07 13:36:34.922816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.242 [2024-10-07 13:36:34.922841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.242 [2024-10-07 13:36:34.935791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.242 [2024-10-07 13:36:34.935825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.242 [2024-10-07 13:36:34.936242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.242 [2024-10-07 13:36:34.936288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.242 [2024-10-07 13:36:34.936306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.242 [2024-10-07 13:36:34.936420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.242 [2024-10-07 13:36:34.936446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.242 [2024-10-07 13:36:34.936463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.243 [2024-10-07 13:36:34.936675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.243 [2024-10-07 13:36:34.936705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.243 [2024-10-07 13:36:34.936753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.243 [2024-10-07 13:36:34.936773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.243 [2024-10-07 13:36:34.936787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.243 [2024-10-07 13:36:34.936804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.243 [2024-10-07 13:36:34.936819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.243 [2024-10-07 13:36:34.936846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.243 [2024-10-07 13:36:34.937306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.243 [2024-10-07 13:36:34.937328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.243 [2024-10-07 13:36:34.950631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.243 [2024-10-07 13:36:34.950690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.243 [2024-10-07 13:36:34.951042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.243 [2024-10-07 13:36:34.951073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.243 [2024-10-07 13:36:34.951091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.243 [2024-10-07 13:36:34.951204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.243 [2024-10-07 13:36:34.951231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.243 [2024-10-07 13:36:34.951247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.243 [2024-10-07 13:36:34.951451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.243 [2024-10-07 13:36:34.951479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.243 [2024-10-07 13:36:34.951689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.243 [2024-10-07 13:36:34.951715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.243 [2024-10-07 13:36:34.951730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.243 [2024-10-07 13:36:34.951748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.243 [2024-10-07 13:36:34.951763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.243 [2024-10-07 13:36:34.951776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.243 [2024-10-07 13:36:34.951826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.243 [2024-10-07 13:36:34.951847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.243 [2024-10-07 13:36:34.965556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.243 [2024-10-07 13:36:34.965590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.243 [2024-10-07 13:36:34.965707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.243 [2024-10-07 13:36:34.965737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.243 [2024-10-07 13:36:34.965754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.243 [2024-10-07 13:36:34.965865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.243 [2024-10-07 13:36:34.965891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.243 [2024-10-07 13:36:34.965908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.243 [2024-10-07 13:36:34.965934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.243 [2024-10-07 13:36:34.965955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.243 [2024-10-07 13:36:34.965984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.243 [2024-10-07 13:36:34.966000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.243 [2024-10-07 13:36:34.966014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.243 [2024-10-07 13:36:34.966030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.243 [2024-10-07 13:36:34.966045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.243 [2024-10-07 13:36:34.966058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.243 [2024-10-07 13:36:34.966099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.243 [2024-10-07 13:36:34.966115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.243 [2024-10-07 13:36:34.981745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.243 [2024-10-07 13:36:34.982338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.243 [2024-10-07 13:36:34.982487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.243 [2024-10-07 13:36:34.982516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.243 [2024-10-07 13:36:34.982533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.243 [2024-10-07 13:36:34.982879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.243 [2024-10-07 13:36:34.982910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.243 [2024-10-07 13:36:34.982927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.243 [2024-10-07 13:36:34.982947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.243 [2024-10-07 13:36:34.983154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.243 [2024-10-07 13:36:34.983179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.243 [2024-10-07 13:36:34.983194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.243 [2024-10-07 13:36:34.983207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.243 [2024-10-07 13:36:34.983259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.243 [2024-10-07 13:36:34.983280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.243 [2024-10-07 13:36:34.983294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.243 [2024-10-07 13:36:34.983308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.243 [2024-10-07 13:36:34.983489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.243 [2024-10-07 13:36:34.998332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.243 [2024-10-07 13:36:34.998364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.243 [2024-10-07 13:36:34.998549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.243 [2024-10-07 13:36:34.998579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.243 [2024-10-07 13:36:34.998596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.243 [2024-10-07 13:36:34.998687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.244 [2024-10-07 13:36:34.998713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.244 [2024-10-07 13:36:34.998729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.244 [2024-10-07 13:36:34.998754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.244 [2024-10-07 13:36:34.998776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.244 [2024-10-07 13:36:34.998797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.244 [2024-10-07 13:36:34.998812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.244 [2024-10-07 13:36:34.998825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.244 [2024-10-07 13:36:34.998842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.244 [2024-10-07 13:36:34.998858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.244 [2024-10-07 13:36:34.998871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.244 [2024-10-07 13:36:34.998895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.244 [2024-10-07 13:36:34.998912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.244 [2024-10-07 13:36:35.012896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.244 [2024-10-07 13:36:35.012931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.244 [2024-10-07 13:36:35.013738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.244 [2024-10-07 13:36:35.013771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.244 [2024-10-07 13:36:35.013789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.244 [2024-10-07 13:36:35.013874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.244 [2024-10-07 13:36:35.013899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.244 [2024-10-07 13:36:35.013915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.244 [2024-10-07 13:36:35.014325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.244 [2024-10-07 13:36:35.014354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.244 [2024-10-07 13:36:35.014581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.244 [2024-10-07 13:36:35.014608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.244 [2024-10-07 13:36:35.014623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.244 [2024-10-07 13:36:35.014641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.249 [2024-10-07 13:36:35.014656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.249 [2024-10-07 13:36:35.014678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.249 [2024-10-07 13:36:35.014738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.249 [2024-10-07 13:36:35.014760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.249 [2024-10-07 13:36:35.026982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.249 [2024-10-07 13:36:35.027017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.249 [2024-10-07 13:36:35.028657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.249 [2024-10-07 13:36:35.028699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.249 [2024-10-07 13:36:35.028718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.249 [2024-10-07 13:36:35.028826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.249 [2024-10-07 13:36:35.028852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.249 [2024-10-07 13:36:35.028867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.250 [2024-10-07 13:36:35.029579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.250 [2024-10-07 13:36:35.029626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.250 [2024-10-07 13:36:35.030046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.250 [2024-10-07 13:36:35.030071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.250 [2024-10-07 13:36:35.030100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.250 [2024-10-07 13:36:35.030118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.250 [2024-10-07 13:36:35.030131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.250 [2024-10-07 13:36:35.030144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.250 [2024-10-07 13:36:35.030218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.250 [2024-10-07 13:36:35.030255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.250 [2024-10-07 13:36:35.037131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.250 [2024-10-07 13:36:35.037165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.250 [2024-10-07 13:36:35.037303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.250 [2024-10-07 13:36:35.037332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.250 [2024-10-07 13:36:35.037349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.250 [2024-10-07 13:36:35.037459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.250 [2024-10-07 13:36:35.037484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.250 [2024-10-07 13:36:35.037500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.250 [2024-10-07 13:36:35.037637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.250 [2024-10-07 13:36:35.037678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.250 [2024-10-07 13:36:35.037806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.250 [2024-10-07 13:36:35.037833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.250 [2024-10-07 13:36:35.037848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.250 [2024-10-07 13:36:35.037866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.250 [2024-10-07 13:36:35.037881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.250 [2024-10-07 13:36:35.037894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.250 [2024-10-07 13:36:35.038005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.250 [2024-10-07 13:36:35.038027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.250 [2024-10-07 13:36:35.047246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.250 [2024-10-07 13:36:35.047292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.250 [2024-10-07 13:36:35.047430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.250 [2024-10-07 13:36:35.047458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.250 [2024-10-07 13:36:35.047475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.250 [2024-10-07 13:36:35.047563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.250 [2024-10-07 13:36:35.047588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.250 [2024-10-07 13:36:35.047605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.250 [2024-10-07 13:36:35.047625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.250 [2024-10-07 13:36:35.047652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.250 [2024-10-07 13:36:35.047679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.250 [2024-10-07 13:36:35.047695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.250 [2024-10-07 13:36:35.047709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.250 [2024-10-07 13:36:35.049103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.250 [2024-10-07 13:36:35.049131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.250 [2024-10-07 13:36:35.049145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.250 [2024-10-07 13:36:35.049158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.250 [2024-10-07 13:36:35.049544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.250 [2024-10-07 13:36:35.061193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.250 [2024-10-07 13:36:35.061227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.250 [2024-10-07 13:36:35.061390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.250 [2024-10-07 13:36:35.061418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.250 [2024-10-07 13:36:35.061436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.250 [2024-10-07 13:36:35.061520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.250 [2024-10-07 13:36:35.061547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.250 [2024-10-07 13:36:35.061563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.250 [2024-10-07 13:36:35.061589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.250 [2024-10-07 13:36:35.061611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.250 [2024-10-07 13:36:35.061633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.250 [2024-10-07 13:36:35.061648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.250 [2024-10-07 13:36:35.061661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.250 [2024-10-07 13:36:35.061692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.250 [2024-10-07 13:36:35.061708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.250 [2024-10-07 13:36:35.061721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.250 [2024-10-07 13:36:35.061745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.250 [2024-10-07 13:36:35.061762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.250 [2024-10-07 13:36:35.077702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.250 [2024-10-07 13:36:35.077752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.250 [2024-10-07 13:36:35.078090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.250 [2024-10-07 13:36:35.078123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.250 [2024-10-07 13:36:35.078141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.250 [2024-10-07 13:36:35.078251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.250 [2024-10-07 13:36:35.078277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.250 [2024-10-07 13:36:35.078293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.250 [2024-10-07 13:36:35.078498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.250 [2024-10-07 13:36:35.078527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.250 [2024-10-07 13:36:35.078575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.250 [2024-10-07 13:36:35.078595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.250 [2024-10-07 13:36:35.078609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.250 [2024-10-07 13:36:35.078626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.250 [2024-10-07 13:36:35.078640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.250 [2024-10-07 13:36:35.078653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.250 [2024-10-07 13:36:35.078842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.250 [2024-10-07 13:36:35.078885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.250 [2024-10-07 13:36:35.093129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.250 [2024-10-07 13:36:35.093162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.250 [2024-10-07 13:36:35.093731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.250 [2024-10-07 13:36:35.093763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.250 [2024-10-07 13:36:35.093781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.250 [2024-10-07 13:36:35.093860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.250 [2024-10-07 13:36:35.093886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.250 [2024-10-07 13:36:35.093902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.250 [2024-10-07 13:36:35.094120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.250 [2024-10-07 13:36:35.094149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.250 [2024-10-07 13:36:35.094349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.250 [2024-10-07 13:36:35.094372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.250 [2024-10-07 13:36:35.094387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.250 [2024-10-07 13:36:35.094404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.250 [2024-10-07 13:36:35.094420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.250 [2024-10-07 13:36:35.094433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.250 [2024-10-07 13:36:35.094496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.250 [2024-10-07 13:36:35.094517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.250 [2024-10-07 13:36:35.108192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.250 [2024-10-07 13:36:35.108226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.250 [2024-10-07 13:36:35.108365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.250 [2024-10-07 13:36:35.108394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.250 [2024-10-07 13:36:35.108410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.250 [2024-10-07 13:36:35.108500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.250 [2024-10-07 13:36:35.108528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.250 [2024-10-07 13:36:35.108544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.250 [2024-10-07 13:36:35.108570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.250 [2024-10-07 13:36:35.108592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.250 [2024-10-07 13:36:35.108613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.250 [2024-10-07 13:36:35.108628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.250 [2024-10-07 13:36:35.108648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.250 [2024-10-07 13:36:35.108675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.251 [2024-10-07 13:36:35.108693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.251 [2024-10-07 13:36:35.108707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.251 [2024-10-07 13:36:35.108751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.251 [2024-10-07 13:36:35.108772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.251 [2024-10-07 13:36:35.124353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.251 [2024-10-07 13:36:35.124402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.251 [2024-10-07 13:36:35.124734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.251 [2024-10-07 13:36:35.124766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.251 [2024-10-07 13:36:35.124784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.251 [2024-10-07 13:36:35.124894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.251 [2024-10-07 13:36:35.124921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.251 [2024-10-07 13:36:35.124937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.251 [2024-10-07 13:36:35.125154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.251 [2024-10-07 13:36:35.125182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.251 [2024-10-07 13:36:35.125230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.251 [2024-10-07 13:36:35.125251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.251 [2024-10-07 13:36:35.125265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.251 [2024-10-07 13:36:35.125283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.251 [2024-10-07 13:36:35.125296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.251 [2024-10-07 13:36:35.125309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.251 [2024-10-07 13:36:35.125504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.251 [2024-10-07 13:36:35.125527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.251 [2024-10-07 13:36:35.140485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.251 [2024-10-07 13:36:35.140519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.251 [2024-10-07 13:36:35.140767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.251 [2024-10-07 13:36:35.140797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.251 [2024-10-07 13:36:35.140814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.251 [2024-10-07 13:36:35.140917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.251 [2024-10-07 13:36:35.140949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.251 [2024-10-07 13:36:35.140966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.251 [2024-10-07 13:36:35.140992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.251 [2024-10-07 13:36:35.141013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.251 [2024-10-07 13:36:35.141035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.251 [2024-10-07 13:36:35.141050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.251 [2024-10-07 13:36:35.141063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.251 [2024-10-07 13:36:35.141080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.251 [2024-10-07 13:36:35.141094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.251 [2024-10-07 13:36:35.141107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.251 [2024-10-07 13:36:35.141132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.251 [2024-10-07 13:36:35.141164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.251 [2024-10-07 13:36:35.155254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.251 [2024-10-07 13:36:35.155286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.251 [2024-10-07 13:36:35.155388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.251 [2024-10-07 13:36:35.155417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.251 [2024-10-07 13:36:35.155434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.251 [2024-10-07 13:36:35.155548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.251 [2024-10-07 13:36:35.155574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.251 [2024-10-07 13:36:35.155590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.251 [2024-10-07 13:36:35.155616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.251 [2024-10-07 13:36:35.155637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.251 [2024-10-07 13:36:35.155659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.251 [2024-10-07 13:36:35.155683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.251 [2024-10-07 13:36:35.155698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.251 [2024-10-07 13:36:35.155714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.251 [2024-10-07 13:36:35.155728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.251 [2024-10-07 13:36:35.155741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.252 [2024-10-07 13:36:35.155767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.252 [2024-10-07 13:36:35.155783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.252 [2024-10-07 13:36:35.169601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.252 [2024-10-07 13:36:35.169635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.252 [2024-10-07 13:36:35.171028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.252 [2024-10-07 13:36:35.171061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.252 [2024-10-07 13:36:35.171079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.252 [2024-10-07 13:36:35.171191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.252 [2024-10-07 13:36:35.171217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.252 [2024-10-07 13:36:35.171232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.252 [2024-10-07 13:36:35.171959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.252 [2024-10-07 13:36:35.172005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.252 [2024-10-07 13:36:35.172256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.252 [2024-10-07 13:36:35.172280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.252 [2024-10-07 13:36:35.172294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.252 [2024-10-07 13:36:35.172313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.252 [2024-10-07 13:36:35.172328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.252 [2024-10-07 13:36:35.172341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.252 [2024-10-07 13:36:35.172545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.252 [2024-10-07 13:36:35.172569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.252 [2024-10-07 13:36:35.180046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.252 [2024-10-07 13:36:35.180079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.252 [2024-10-07 13:36:35.180267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.252 [2024-10-07 13:36:35.180296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.252 [2024-10-07 13:36:35.180313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.252 [2024-10-07 13:36:35.180424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.252 [2024-10-07 13:36:35.180450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.252 [2024-10-07 13:36:35.180466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.252 [2024-10-07 13:36:35.180925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.252 [2024-10-07 13:36:35.180955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.252 [2024-10-07 13:36:35.180994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.252 [2024-10-07 13:36:35.181009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.252 [2024-10-07 13:36:35.181031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.252 [2024-10-07 13:36:35.181048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.252 [2024-10-07 13:36:35.181062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.252 [2024-10-07 13:36:35.181075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.252 [2024-10-07 13:36:35.181098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.252 [2024-10-07 13:36:35.181113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.252 [2024-10-07 13:36:35.190176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.252 [2024-10-07 13:36:35.190394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.252 [2024-10-07 13:36:35.190626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.252 [2024-10-07 13:36:35.190656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.252 [2024-10-07 13:36:35.190682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.252 [2024-10-07 13:36:35.190821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.252 [2024-10-07 13:36:35.190849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.252 [2024-10-07 13:36:35.190865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.252 [2024-10-07 13:36:35.190885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.252 [2024-10-07 13:36:35.190911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.252 [2024-10-07 13:36:35.190930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.252 [2024-10-07 13:36:35.190944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.252 [2024-10-07 13:36:35.190958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.252 [2024-10-07 13:36:35.190983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.252 [2024-10-07 13:36:35.190999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.252 [2024-10-07 13:36:35.191013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.252 [2024-10-07 13:36:35.191027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.252 [2024-10-07 13:36:35.191272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.252 [2024-10-07 13:36:35.203721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.252 [2024-10-07 13:36:35.203754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.252 [2024-10-07 13:36:35.203864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.252 [2024-10-07 13:36:35.203892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.252 [2024-10-07 13:36:35.203909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.252 [2024-10-07 13:36:35.203999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.252 [2024-10-07 13:36:35.204026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.252 [2024-10-07 13:36:35.204047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.252 [2024-10-07 13:36:35.204074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.252 [2024-10-07 13:36:35.204096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.252 [2024-10-07 13:36:35.204117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.252 [2024-10-07 13:36:35.204148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.252 [2024-10-07 13:36:35.204161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.252 [2024-10-07 13:36:35.204178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.252 [2024-10-07 13:36:35.204191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.252 [2024-10-07 13:36:35.204204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.252 [2024-10-07 13:36:35.204245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.252 [2024-10-07 13:36:35.204261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.252 [2024-10-07 13:36:35.219810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.253 [2024-10-07 13:36:35.219844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.253 [2024-10-07 13:36:35.220205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.253 [2024-10-07 13:36:35.220237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.253 [2024-10-07 13:36:35.220255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.253 [2024-10-07 13:36:35.220367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.253 [2024-10-07 13:36:35.220395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.253 [2024-10-07 13:36:35.220411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.253 [2024-10-07 13:36:35.220779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.253 [2024-10-07 13:36:35.220809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.253 [2024-10-07 13:36:35.220894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.253 [2024-10-07 13:36:35.220915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.253 [2024-10-07 13:36:35.220930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.253 [2024-10-07 13:36:35.220948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.253 [2024-10-07 13:36:35.220962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.253 [2024-10-07 13:36:35.220974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.253 [2024-10-07 13:36:35.221156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.253 [2024-10-07 13:36:35.221192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.253 [2024-10-07 13:36:35.235825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.253 [2024-10-07 13:36:35.235865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.253 [2024-10-07 13:36:35.236220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.253 [2024-10-07 13:36:35.236252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.253 [2024-10-07 13:36:35.236270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.253 [2024-10-07 13:36:35.236378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.253 [2024-10-07 13:36:35.236404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.253 [2024-10-07 13:36:35.236421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.253 [2024-10-07 13:36:35.236640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.253 [2024-10-07 13:36:35.236680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.253 [2024-10-07 13:36:35.236886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.253 [2024-10-07 13:36:35.236911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.253 [2024-10-07 13:36:35.236925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.253 [2024-10-07 13:36:35.236943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.253 [2024-10-07 13:36:35.236957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.253 [2024-10-07 13:36:35.236970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.253 [2024-10-07 13:36:35.237173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.253 [2024-10-07 13:36:35.237198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.253 [2024-10-07 13:36:35.251891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.253 [2024-10-07 13:36:35.251924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.253 [2024-10-07 13:36:35.252267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.253 [2024-10-07 13:36:35.252299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.253 [2024-10-07 13:36:35.252317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.253 [2024-10-07 13:36:35.252409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.253 [2024-10-07 13:36:35.252435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.253 [2024-10-07 13:36:35.252451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.253 [2024-10-07 13:36:35.252655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.253 [2024-10-07 13:36:35.252694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.253 [2024-10-07 13:36:35.252906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.253 [2024-10-07 13:36:35.252930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.253 [2024-10-07 13:36:35.252945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.253 [2024-10-07 13:36:35.252968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.253 [2024-10-07 13:36:35.252984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.253 [2024-10-07 13:36:35.252997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.253 [2024-10-07 13:36:35.253048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.253 [2024-10-07 13:36:35.253068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.253 [2024-10-07 13:36:35.267711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.253 [2024-10-07 13:36:35.267745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.253 [2024-10-07 13:36:35.268296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.253 [2024-10-07 13:36:35.268327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.253 [2024-10-07 13:36:35.268344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.253 [2024-10-07 13:36:35.268458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.253 [2024-10-07 13:36:35.268484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.253 [2024-10-07 13:36:35.268500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.253 [2024-10-07 13:36:35.268727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.253 [2024-10-07 13:36:35.268758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.253 [2024-10-07 13:36:35.268960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.253 [2024-10-07 13:36:35.268985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.253 [2024-10-07 13:36:35.268999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.253 [2024-10-07 13:36:35.269017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.253 [2024-10-07 13:36:35.269031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.253 [2024-10-07 13:36:35.269044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.253 [2024-10-07 13:36:35.269095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.253 [2024-10-07 13:36:35.269116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.253 [2024-10-07 13:36:35.283048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.253 [2024-10-07 13:36:35.283081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.253 [2024-10-07 13:36:35.283422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.254 [2024-10-07 13:36:35.283453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.254 [2024-10-07 13:36:35.283471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.254 [2024-10-07 13:36:35.283606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.254 [2024-10-07 13:36:35.283633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.254 [2024-10-07 13:36:35.283649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.254 [2024-10-07 13:36:35.284048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.254 [2024-10-07 13:36:35.284093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.254 [2024-10-07 13:36:35.284166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.254 [2024-10-07 13:36:35.284186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.254 [2024-10-07 13:36:35.284215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.254 [2024-10-07 13:36:35.284233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.254 [2024-10-07 13:36:35.284248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.254 [2024-10-07 13:36:35.284261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.254 [2024-10-07 13:36:35.284443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.254 [2024-10-07 13:36:35.284466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.254 [2024-10-07 13:36:35.298834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.254 [2024-10-07 13:36:35.298867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.254 [2024-10-07 13:36:35.299008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.254 [2024-10-07 13:36:35.299039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.254 [2024-10-07 13:36:35.299056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.254 [2024-10-07 13:36:35.299170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.254 [2024-10-07 13:36:35.299198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.254 [2024-10-07 13:36:35.299214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.254 [2024-10-07 13:36:35.299239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.254 [2024-10-07 13:36:35.299260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.254 [2024-10-07 13:36:35.299282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.254 [2024-10-07 13:36:35.299296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.254 [2024-10-07 13:36:35.299309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.254 [2024-10-07 13:36:35.299326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.254 [2024-10-07 13:36:35.299340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.254 [2024-10-07 13:36:35.299352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.254 [2024-10-07 13:36:35.299377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.254 [2024-10-07 13:36:35.299394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.254 [2024-10-07 13:36:35.314389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.254 [2024-10-07 13:36:35.314422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.254 [2024-10-07 13:36:35.314848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.254 [2024-10-07 13:36:35.314880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.254 [2024-10-07 13:36:35.314897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.254 [2024-10-07 13:36:35.315008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.254 [2024-10-07 13:36:35.315034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.254 [2024-10-07 13:36:35.315051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.254 [2024-10-07 13:36:35.315317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.254 [2024-10-07 13:36:35.315348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.254 [2024-10-07 13:36:35.315579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.254 [2024-10-07 13:36:35.315604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.254 [2024-10-07 13:36:35.315618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.254 [2024-10-07 13:36:35.315635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.254 [2024-10-07 13:36:35.315649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.254 [2024-10-07 13:36:35.315663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.254 [2024-10-07 13:36:35.315879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.254 [2024-10-07 13:36:35.315904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.254 [2024-10-07 13:36:35.330404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.254 [2024-10-07 13:36:35.330436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.254 [2024-10-07 13:36:35.330976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.254 [2024-10-07 13:36:35.331007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.254 [2024-10-07 13:36:35.331024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.254 [2024-10-07 13:36:35.331103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.254 [2024-10-07 13:36:35.331128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.254 [2024-10-07 13:36:35.331144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.254 [2024-10-07 13:36:35.331389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.254 [2024-10-07 13:36:35.331419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.254 [2024-10-07 13:36:35.331483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.254 [2024-10-07 13:36:35.331503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.254 [2024-10-07 13:36:35.331533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.254 [2024-10-07 13:36:35.331551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.254 [2024-10-07 13:36:35.331571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.254 [2024-10-07 13:36:35.331586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.254 [2024-10-07 13:36:35.331613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.254 [2024-10-07 13:36:35.331630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.254 [2024-10-07 13:36:35.345855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.254 [2024-10-07 13:36:35.345905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.254 [2024-10-07 13:36:35.346265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.254 [2024-10-07 13:36:35.346297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.254 [2024-10-07 13:36:35.346315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.254 [2024-10-07 13:36:35.346397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.254 [2024-10-07 13:36:35.346423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.254 [2024-10-07 13:36:35.346439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.255 [2024-10-07 13:36:35.346644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.255 [2024-10-07 13:36:35.346683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.255 [2024-10-07 13:36:35.346886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.255 [2024-10-07 13:36:35.346910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.255 [2024-10-07 13:36:35.346925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.255 [2024-10-07 13:36:35.346943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.346958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.346971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.255 [2024-10-07 13:36:35.347036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.255 [2024-10-07 13:36:35.347055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.255 [2024-10-07 13:36:35.360303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.360337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.360497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.360526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.255 [2024-10-07 13:36:35.360544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.360631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.360659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.255 [2024-10-07 13:36:35.360685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.360712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.360740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.360762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.360778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.360792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.255 [2024-10-07 13:36:35.360809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.360824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.360836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.255 [2024-10-07 13:36:35.360876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.255 [2024-10-07 13:36:35.360892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.255 [2024-10-07 13:36:35.375195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.375229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.376086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.376118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.255 [2024-10-07 13:36:35.376136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.376248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.376275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.255 [2024-10-07 13:36:35.376291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.376383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.376409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.376431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.376446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.376459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.255 [2024-10-07 13:36:35.376477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.376491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.376504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.255 [2024-10-07 13:36:35.376528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.255 [2024-10-07 13:36:35.376544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.255 [2024-10-07 13:36:35.388463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.388499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.391110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.391147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.255 [2024-10-07 13:36:35.391165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.391257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.391284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.255 [2024-10-07 13:36:35.391300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.392251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.392281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.392727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.392752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.392766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.255 [2024-10-07 13:36:35.392784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.392798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.392812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.255 [2024-10-07 13:36:35.393084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.255 [2024-10-07 13:36:35.393111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.255 [2024-10-07 13:36:35.398582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.398905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.399136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.399166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.255 [2024-10-07 13:36:35.399183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.399394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.399423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.255 [2024-10-07 13:36:35.399440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.399459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.399596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.399622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.399636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.399664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.255 [2024-10-07 13:36:35.399808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.255 [2024-10-07 13:36:35.399830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.399850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.399864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.255 [2024-10-07 13:36:35.401337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.255 [2024-10-07 13:36:35.408691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.408875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.408910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.255 [2024-10-07 13:36:35.408940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.408965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.408990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.409005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.409019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.255 [2024-10-07 13:36:35.409055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.255 [2024-10-07 13:36:35.409082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.409281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.409308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.255 [2024-10-07 13:36:35.409325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.409351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.409374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.409389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.409403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.255 [2024-10-07 13:36:35.409427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.255 [2024-10-07 13:36:35.418788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.418919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.418950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.255 [2024-10-07 13:36:35.418967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.421072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.423193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.423220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.423235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.255 [2024-10-07 13:36:35.424107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.424141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.255 [2024-10-07 13:36:35.424529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.424559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.255 [2024-10-07 13:36:35.424576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.424627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.424655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.424680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.424695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.255 [2024-10-07 13:36:35.424720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.255 [2024-10-07 13:36:35.430217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.430369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.430399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.255 [2024-10-07 13:36:35.430417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.430489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.430892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.430916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.430930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.255 [2024-10-07 13:36:35.430956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.255 [2024-10-07 13:36:35.436545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.436752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.436784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.255 [2024-10-07 13:36:35.436801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.436828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.436852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.436868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.436881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.255 [2024-10-07 13:36:35.436907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.255 [2024-10-07 13:36:35.440609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.255 [2024-10-07 13:36:35.440865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.255 [2024-10-07 13:36:35.440896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.255 [2024-10-07 13:36:35.440914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.255 [2024-10-07 13:36:35.440945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.255 [2024-10-07 13:36:35.440971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.255 [2024-10-07 13:36:35.440991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.255 [2024-10-07 13:36:35.441005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.441030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.447765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.448002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.448043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.256 [2024-10-07 13:36:35.448061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.448171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.450775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.450802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.450816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.451733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.452014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.452855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.452885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.256 [2024-10-07 13:36:35.452903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.453451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.453714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.453748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.453762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.453965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.457853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.457974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.458004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.256 [2024-10-07 13:36:35.458022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.458048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.458072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.458087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.458107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.458132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.462854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.463116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.463148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.256 [2024-10-07 13:36:35.463166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.463275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.463404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.463425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.463440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.465088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.468695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.469602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.469633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.256 [2024-10-07 13:36:35.469650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.470074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.470300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.470325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.470341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.470393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.472942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.473107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.473135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.256 [2024-10-07 13:36:35.473152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.473177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.473201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.473216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.473230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.473254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.481334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.481980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.482012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.256 [2024-10-07 13:36:35.482030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.482254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.482311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.482331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.482345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.482371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.483041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.483160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.483187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.256 [2024-10-07 13:36:35.483219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.483244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.483429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.483452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.483467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.483589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.492193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.492449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.492481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.256 [2024-10-07 13:36:35.492500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.492607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.492744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.492767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.492781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.492889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.498022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.498345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.498377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.256 [2024-10-07 13:36:35.498395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.498446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.498480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.498497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.498510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.498702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.502281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.502495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.502523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.256 [2024-10-07 13:36:35.502540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.502565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.502589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.502605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.502620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.502644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.511685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.511809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.511838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.256 [2024-10-07 13:36:35.511855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.511881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.511905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.511921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.511935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.511960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.512363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.512494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.512524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.256 [2024-10-07 13:36:35.512541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.512567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.512591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.512605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.512619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.512653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.521778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.521900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.521929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.256 [2024-10-07 13:36:35.521947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.521972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.521996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.522011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.522024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.522049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.527253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.527519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.527550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.256 [2024-10-07 13:36:35.527568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.527594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.527618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.527634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.527647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.527681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.532827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.533087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.533119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.256 [2024-10-07 13:36:35.533137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.256 [2024-10-07 13:36:35.533243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.256 [2024-10-07 13:36:35.533285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.256 [2024-10-07 13:36:35.533305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.256 [2024-10-07 13:36:35.533319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.256 [2024-10-07 13:36:35.533360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.256 [2024-10-07 13:36:35.542133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.256 [2024-10-07 13:36:35.542460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.256 [2024-10-07 13:36:35.542492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.257 [2024-10-07 13:36:35.542530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.257 [2024-10-07 13:36:35.542614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.257 [2024-10-07 13:36:35.542812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.257 [2024-10-07 13:36:35.542836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.257 [2024-10-07 13:36:35.542850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.257 [2024-10-07 13:36:35.542902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.257 [2024-10-07 13:36:35.542947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.257 [2024-10-07 13:36:35.543079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.257 [2024-10-07 13:36:35.543106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.257 [2024-10-07 13:36:35.543124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.257 [2024-10-07 13:36:35.543308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.257 [2024-10-07 13:36:35.543378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.257 [2024-10-07 13:36:35.543414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.257 [2024-10-07 13:36:35.543428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.257 [2024-10-07 13:36:35.543453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.257 [2024-10-07 13:36:35.556992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.257 [2024-10-07 13:36:35.557024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.257 [2024-10-07 13:36:35.557159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.257 [2024-10-07 13:36:35.557188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.257 [2024-10-07 13:36:35.557205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.257 [2024-10-07 13:36:35.557305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.257 [2024-10-07 13:36:35.557332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.257 [2024-10-07 13:36:35.557348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.257 [2024-10-07 13:36:35.557374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.257 [2024-10-07 13:36:35.557396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.257 [2024-10-07 13:36:35.557417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.257 [2024-10-07 13:36:35.557432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.257 [2024-10-07 13:36:35.557445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.257 [2024-10-07 13:36:35.557462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.257 [2024-10-07 13:36:35.557482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.257 [2024-10-07 13:36:35.557495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.257 [2024-10-07 13:36:35.557520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.257 [2024-10-07 13:36:35.557537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.257 [2024-10-07 13:36:35.567102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.257 [2024-10-07 13:36:35.567149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.257 [2024-10-07 13:36:35.567260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.257 [2024-10-07 13:36:35.567287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.257 [2024-10-07 13:36:35.567304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.257 [2024-10-07 13:36:35.567428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.257 [2024-10-07 13:36:35.567454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.257 [2024-10-07 13:36:35.567470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.257 [2024-10-07 13:36:35.567489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.257 [2024-10-07 13:36:35.570195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.257 [2024-10-07 13:36:35.570225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.257 [2024-10-07 13:36:35.570239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.257 [2024-10-07 13:36:35.570253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.257 [2024-10-07 13:36:35.570705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.257 [2024-10-07 13:36:35.570733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.257 [2024-10-07 13:36:35.570748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.257 [2024-10-07 13:36:35.570761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.257 [2024-10-07 13:36:35.570896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.257 [2024-10-07 13:36:35.577187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.257 [2024-10-07 13:36:35.577306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.257 [2024-10-07 13:36:35.577336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.257 [2024-10-07 13:36:35.577353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.257 [2024-10-07 13:36:35.577552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.257 [2024-10-07 13:36:35.577629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.257 [2024-10-07 13:36:35.577687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.257 [2024-10-07 13:36:35.577705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.257 [2024-10-07 13:36:35.577720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.257 [2024-10-07 13:36:35.577751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.257 [2024-10-07 13:36:35.577843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.257 [2024-10-07 13:36:35.577870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.257 [2024-10-07 13:36:35.577887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.257 [2024-10-07 13:36:35.577912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.257 [2024-10-07 13:36:35.577936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.257 [2024-10-07 13:36:35.577952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.257 [2024-10-07 13:36:35.577965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.257 [2024-10-07 13:36:35.577989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.257 [2024-10-07 13:36:35.591233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.257 [2024-10-07 13:36:35.591743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.257 [2024-10-07 13:36:35.591854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.257 [2024-10-07 13:36:35.591884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.257 [2024-10-07 13:36:35.591901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.257 [2024-10-07 13:36:35.592229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.257 [2024-10-07 13:36:35.592260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.257 [2024-10-07 13:36:35.592278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.257 [2024-10-07 13:36:35.592297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.257 [2024-10-07 13:36:35.592542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.257 [2024-10-07 13:36:35.592570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.257 [2024-10-07 13:36:35.592585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.257 [2024-10-07 13:36:35.592614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.257 [2024-10-07 13:36:35.592844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.257 [2024-10-07 13:36:35.592868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.257 [2024-10-07 13:36:35.592882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.257 [2024-10-07 13:36:35.592895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.257 [2024-10-07 13:36:35.592946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.257 [2024-10-07 13:36:35.605687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.257 [2024-10-07 13:36:35.605721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.257 [2024-10-07 13:36:35.606113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.257 [2024-10-07 13:36:35.606145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.257 [2024-10-07 13:36:35.606169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.257 [2024-10-07 13:36:35.606306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.257 [2024-10-07 13:36:35.606333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.257 [2024-10-07 13:36:35.606349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.257 [2024-10-07 13:36:35.606768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.257 [2024-10-07 13:36:35.606799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.257 [2024-10-07 13:36:35.607011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.257 [2024-10-07 13:36:35.607036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.257 [2024-10-07 13:36:35.607050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.257 [2024-10-07 13:36:35.607068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.257 [2024-10-07 13:36:35.607083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.257 [2024-10-07 13:36:35.607096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.257 [2024-10-07 13:36:35.607147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.257 [2024-10-07 13:36:35.607168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.257 [2024-10-07 13:36:35.620293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.257 [2024-10-07 13:36:35.620340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.257 [2024-10-07 13:36:35.620927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.257 [2024-10-07 13:36:35.620959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.257 [2024-10-07 13:36:35.620976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.257 [2024-10-07 13:36:35.621059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.257 [2024-10-07 13:36:35.621084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.257 [2024-10-07 13:36:35.621100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.257 [2024-10-07 13:36:35.621319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.257 [2024-10-07 13:36:35.621348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.257 [2024-10-07 13:36:35.621548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.257 [2024-10-07 13:36:35.621571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.257 [2024-10-07 13:36:35.621585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.257 [2024-10-07 13:36:35.621602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.257 [2024-10-07 13:36:35.621616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.257 [2024-10-07 13:36:35.621635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.257 [2024-10-07 13:36:35.621711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.257 [2024-10-07 13:36:35.621748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.257 [2024-10-07 13:36:35.630600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.257 [2024-10-07 13:36:35.630632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.257 [2024-10-07 13:36:35.632618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.257 [2024-10-07 13:36:35.632651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.257 [2024-10-07 13:36:35.632676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.257 [2024-10-07 13:36:35.632764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.257 [2024-10-07 13:36:35.632790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.257 [2024-10-07 13:36:35.632806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.257 [2024-10-07 13:36:35.635114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.257 [2024-10-07 13:36:35.635147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.257 [2024-10-07 13:36:35.636062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.257 [2024-10-07 13:36:35.636087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.257 [2024-10-07 13:36:35.636115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.257 [2024-10-07 13:36:35.636134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.257 [2024-10-07 13:36:35.636148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.257 [2024-10-07 13:36:35.636162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.257 [2024-10-07 13:36:35.636727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.257 [2024-10-07 13:36:35.636754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.257 8455.07 IOPS, 33.03 MiB/s [2024-10-07T11:36:37.969Z]
00:25:56.257 [2024-10-07 13:36:35.640720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.257 [2024-10-07 13:36:35.640768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.257 [2024-10-07 13:36:35.641055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.257 [2024-10-07 13:36:35.641087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.257 [2024-10-07 13:36:35.641105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.257 [2024-10-07 13:36:35.641184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.257 [2024-10-07 13:36:35.641210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.257 [2024-10-07 13:36:35.641226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.257 [2024-10-07 13:36:35.641368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.257 [2024-10-07 13:36:35.641402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.641511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.641533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.641547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.641565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.641579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.641592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.641709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.641746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.650848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.651329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.651455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.651483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.258 [2024-10-07 13:36:35.651500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.651686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.651715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.258 [2024-10-07 13:36:35.651732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.651751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.652002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.652026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.652039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.652052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.652118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.652139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.652154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.652167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.652347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.661755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.661789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.663066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.663100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.258 [2024-10-07 13:36:35.663122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.663207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.663232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.258 [2024-10-07 13:36:35.663248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.664950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.664982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.665479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.665519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.665533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.665550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.665563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.665576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.665891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.665916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.671870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.671917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.672134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.672162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.258 [2024-10-07 13:36:35.672178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.672265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.672292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.258 [2024-10-07 13:36:35.672308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.672327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.672353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.672371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.672385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.672397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.672422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.672440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.672453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.672471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.672511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.682175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.682209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.682340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.682369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.258 [2024-10-07 13:36:35.682386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.682469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.682495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.258 [2024-10-07 13:36:35.682511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.682704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.682733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.682781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.682802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.682815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.682832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.682847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.682860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.682886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.682902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.694046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.694078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.694217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.694246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.258 [2024-10-07 13:36:35.694264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.694335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.694361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.258 [2024-10-07 13:36:35.694377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.694402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.694423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.694451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.694468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.694481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.694498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.694513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.694525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.694550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.694566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.709897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.709931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.710176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.710205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.258 [2024-10-07 13:36:35.710223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.710332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.710358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.258 [2024-10-07 13:36:35.710374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.710400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.710422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.710444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.710459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.710473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.710490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.710506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.710519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.710545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.710561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.723152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.723184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.725040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.725073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.258 [2024-10-07 13:36:35.725091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.725179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.725205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.258 [2024-10-07 13:36:35.725222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.725898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.725930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.726056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.726078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.726092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.726110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.726125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.726138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.726416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.726440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.733263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.733309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.733487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.733516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.258 [2024-10-07 13:36:35.733534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.733650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.733684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.258 [2024-10-07 13:36:35.733701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.733719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.734416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.734444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.734458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.734472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.739513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.739543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.739558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.739571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.739710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.744142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.744173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.744312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.744340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.258 [2024-10-07 13:36:35.744356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.744426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.744451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.258 [2024-10-07 13:36:35.744467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.744493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.744514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.744535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.744551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.744565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.744581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.744595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.744608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.744633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.744650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.756279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.756313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.258 [2024-10-07 13:36:35.756625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.756655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.258 [2024-10-07 13:36:35.756680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.756761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.258 [2024-10-07 13:36:35.756787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.258 [2024-10-07 13:36:35.756803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.258 [2024-10-07 13:36:35.757264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.757295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.258 [2024-10-07 13:36:35.757512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.757541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.757557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.757574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.258 [2024-10-07 13:36:35.757589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.258 [2024-10-07 13:36:35.757602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.258 [2024-10-07 13:36:35.757815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.258 [2024-10-07 13:36:35.757839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.259 [2024-10-07 13:36:35.771187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.259 [2024-10-07 13:36:35.771222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.259 [2024-10-07 13:36:35.771614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.259 [2024-10-07 13:36:35.771647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.259 [2024-10-07 13:36:35.771674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.259 [2024-10-07 13:36:35.771790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.259 [2024-10-07 13:36:35.771817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.259 [2024-10-07 13:36:35.771832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.259 [2024-10-07 13:36:35.772037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.259 [2024-10-07 13:36:35.772066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.259 [2024-10-07 13:36:35.772266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.259 [2024-10-07 13:36:35.772288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.259 [2024-10-07 13:36:35.772303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.259 [2024-10-07 13:36:35.772320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.259 [2024-10-07 13:36:35.772335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.259 [2024-10-07 13:36:35.772348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.259 [2024-10-07 13:36:35.772412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.259 [2024-10-07 13:36:35.772432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.259 [2024-10-07 13:36:35.781498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.781625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.781801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.781830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.259 [2024-10-07 13:36:35.781847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.784892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.784929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.259 [2024-10-07 13:36:35.784947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.784967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.786201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.786229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.786242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.786255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.259 [2024-10-07 13:36:35.786881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.259 [2024-10-07 13:36:35.786909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.786924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.786938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.259 [2024-10-07 13:36:35.787197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.259 [2024-10-07 13:36:35.791583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.791761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.791790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.259 [2024-10-07 13:36:35.791807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.791832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.791869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.791888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.791902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.259 [2024-10-07 13:36:35.791928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.791950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.259 [2024-10-07 13:36:35.792168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.792196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.259 [2024-10-07 13:36:35.792213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.792238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.792263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.792278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.792291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.259 [2024-10-07 13:36:35.792315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.259 [2024-10-07 13:36:35.801664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.801814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.801844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.259 [2024-10-07 13:36:35.801861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.801886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.801914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.801931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.801945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.259 [2024-10-07 13:36:35.801970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.259 [2024-10-07 13:36:35.802015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.802202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.802230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.259 [2024-10-07 13:36:35.802246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.802492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.802599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.802621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.802635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.259 [2024-10-07 13:36:35.802660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.259 [2024-10-07 13:36:35.815906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.815940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.816220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.816251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.259 [2024-10-07 13:36:35.816268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.816382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.816408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.259 [2024-10-07 13:36:35.816423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.816626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.816655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.817200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.817227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.817251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.259 [2024-10-07 13:36:35.817269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.817283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.817295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.259 [2024-10-07 13:36:35.817532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.259 [2024-10-07 13:36:35.817556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.259 [2024-10-07 13:36:35.826649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.826692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.826913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.826943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.259 [2024-10-07 13:36:35.826959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.827067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.827093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.259 [2024-10-07 13:36:35.827109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.829915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.829948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.830425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.830448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.830462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.259 [2024-10-07 13:36:35.830478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.830491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.830504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.259 [2024-10-07 13:36:35.830607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.259 [2024-10-07 13:36:35.830630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.259 [2024-10-07 13:36:35.836963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.836995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.837232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.837261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.259 [2024-10-07 13:36:35.837277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.837384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.837410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.259 [2024-10-07 13:36:35.837432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.838060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.838089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.838147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.838166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.838180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.259 [2024-10-07 13:36:35.838213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.838228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.838241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.259 [2024-10-07 13:36:35.838266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.259 [2024-10-07 13:36:35.838282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.259 [2024-10-07 13:36:35.847077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.847295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.847445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.847476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.259 [2024-10-07 13:36:35.847493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.847638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.847672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.259 [2024-10-07 13:36:35.847692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.847711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.847911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.847936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.847951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.847979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.259 [2024-10-07 13:36:35.848043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.259 [2024-10-07 13:36:35.848065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.848078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.848092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.259 [2024-10-07 13:36:35.848116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.259 [2024-10-07 13:36:35.861391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.861430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.861590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.861619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.259 [2024-10-07 13:36:35.861636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.861734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.861760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.259 [2024-10-07 13:36:35.861777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.861978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.862021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.259 [2024-10-07 13:36:35.862069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.862105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.862118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.259 [2024-10-07 13:36:35.862136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.259 [2024-10-07 13:36:35.862151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.259 [2024-10-07 13:36:35.862164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.259 [2024-10-07 13:36:35.862347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.259 [2024-10-07 13:36:35.862370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.259 [2024-10-07 13:36:35.877296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.877329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.259 [2024-10-07 13:36:35.877463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.877492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.259 [2024-10-07 13:36:35.877509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.877585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.259 [2024-10-07 13:36:35.877611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.259 [2024-10-07 13:36:35.877627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.259 [2024-10-07 13:36:35.877652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.877683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.877706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.877720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.877734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.260 [2024-10-07 13:36:35.877757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.877773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.877786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.260 [2024-10-07 13:36:35.877810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.260 [2024-10-07 13:36:35.877827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.260 [2024-10-07 13:36:35.891958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.891991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.892371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.892402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.260 [2024-10-07 13:36:35.892419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.892512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.892537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.260 [2024-10-07 13:36:35.892553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.892768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.892799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.893000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.893025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.893041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.260 [2024-10-07 13:36:35.893058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.893072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.893087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.260 [2024-10-07 13:36:35.893344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.260 [2024-10-07 13:36:35.893369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.260 [2024-10-07 13:36:35.906650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.906691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.906799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.906827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.260 [2024-10-07 13:36:35.906844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.906928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.906955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.260 [2024-10-07 13:36:35.906977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.907003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.907025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.907045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.907060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.907073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.260 [2024-10-07 13:36:35.907090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.907104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.907117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.260 [2024-10-07 13:36:35.907141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.260 [2024-10-07 13:36:35.907157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.260 [2024-10-07 13:36:35.919169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.919203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.919429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.919459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.260 [2024-10-07 13:36:35.919477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.919588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.919616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.260 [2024-10-07 13:36:35.919632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.919748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.919778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.919896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.919918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.919932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.260 [2024-10-07 13:36:35.919948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.919962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.919992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.260 [2024-10-07 13:36:35.922137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.260 [2024-10-07 13:36:35.922164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.260 [2024-10-07 13:36:35.929474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.929507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.929789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.929820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.260 [2024-10-07 13:36:35.929839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.929920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.929947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.260 [2024-10-07 13:36:35.929963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.930071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.930098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.930214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.930249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.930262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.260 [2024-10-07 13:36:35.930279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.930293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.930304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.260 [2024-10-07 13:36:35.930349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.260 [2024-10-07 13:36:35.930368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.260 [2024-10-07 13:36:35.939651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.939693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.939808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.939838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.260 [2024-10-07 13:36:35.939856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.939998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.940025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.260 [2024-10-07 13:36:35.940042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.940518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.940548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.940789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.940815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.940829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.260 [2024-10-07 13:36:35.940847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.940867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.940881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.260 [2024-10-07 13:36:35.941085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.260 [2024-10-07 13:36:35.941110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.260 [2024-10-07 13:36:35.950019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.950052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.950279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.950324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.260 [2024-10-07 13:36:35.950341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.950458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.950486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.260 [2024-10-07 13:36:35.950502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.953703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.953736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.954561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.954586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.954600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.260 [2024-10-07 13:36:35.954617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.954631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.954644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.260 [2024-10-07 13:36:35.955124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.260 [2024-10-07 13:36:35.955148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.260 [2024-10-07 13:36:35.960149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.960194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.960372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.960401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.260 [2024-10-07 13:36:35.960418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.960553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.960581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.260 [2024-10-07 13:36:35.960597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.960621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.960648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.960675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.960692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.960705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.260 [2024-10-07 13:36:35.960730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.260 [2024-10-07 13:36:35.960748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.960761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.960774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.260 [2024-10-07 13:36:35.960797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.260 [2024-10-07 13:36:35.970233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.970507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.970538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.260 [2024-10-07 13:36:35.970556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.970622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.970656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.970697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.970714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.970727] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.260 [2024-10-07 13:36:35.970751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.260 [2024-10-07 13:36:35.970868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.970897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.260 [2024-10-07 13:36:35.970913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.970939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.970963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.970978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.970991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.260 [2024-10-07 13:36:35.971165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.260 [2024-10-07 13:36:35.983896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.983930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.260 [2024-10-07 13:36:35.984958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.984995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.260 [2024-10-07 13:36:35.985013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.985100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.260 [2024-10-07 13:36:35.985126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.260 [2024-10-07 13:36:35.985142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.260 [2024-10-07 13:36:35.985716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.985747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.260 [2024-10-07 13:36:35.985998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.260 [2024-10-07 13:36:35.986024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.260 [2024-10-07 13:36:35.986038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.261 [2024-10-07 13:36:35.986057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:35.986072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:35.986085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:35.986150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.261 [2024-10-07 13:36:35.986186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.261 [2024-10-07 13:36:35.998017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:35.998051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:35.998653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:35.998707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.261 [2024-10-07 13:36:35.998726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:35.998807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:35.998833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.261 [2024-10-07 13:36:35.998849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:35.999707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:35.999737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.000158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.000182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.000210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.000228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.000242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.000275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.000522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.261 [2024-10-07 13:36:36.000548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.261 [2024-10-07 13:36:36.008136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.008186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.008367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.008397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.261 [2024-10-07 13:36:36.008414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.008729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.008760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.261 [2024-10-07 13:36:36.008777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.008796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.008934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.008959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.008974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.008987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.261 [2024-10-07 13:36:36.009094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.261 [2024-10-07 13:36:36.009115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.009129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.009143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.009240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.261 [2024-10-07 13:36:36.018223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.018555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.018586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.261 [2024-10-07 13:36:36.018603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.018781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.018912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.018949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.018966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.018980] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.019093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.261 [2024-10-07 13:36:36.019185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.019213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.261 [2024-10-07 13:36:36.019230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.019337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.019441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.019463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.019477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.021358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.261 [2024-10-07 13:36:36.028396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.028538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.028569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.261 [2024-10-07 13:36:36.028586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.028785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.028846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.028868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.028882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.028908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.261 [2024-10-07 13:36:36.028995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.029311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.029340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.261 [2024-10-07 13:36:36.029357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.029407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.029435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.029451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.029464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.029489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.261 [2024-10-07 13:36:36.041961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.042629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.042807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.042838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.261 [2024-10-07 13:36:36.042861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.043321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.043351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.261 [2024-10-07 13:36:36.043368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.043387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.043622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.043649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.043664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.043689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.261 [2024-10-07 13:36:36.043893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.261 [2024-10-07 13:36:36.043918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.043932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.043945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.043995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.261 [2024-10-07 13:36:36.052051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.052278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.052308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.261 [2024-10-07 13:36:36.052326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.052351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.052375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.052391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.052404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.261 [2024-10-07 13:36:36.052428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.261 [2024-10-07 13:36:36.056597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.057001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.057032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.261 [2024-10-07 13:36:36.057049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.057607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.057876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.057902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.057922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.058126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.261 [2024-10-07 13:36:36.062412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.062659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.062698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.261 [2024-10-07 13:36:36.062716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.062772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.062801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.062817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.062831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.062854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.261 [2024-10-07 13:36:36.066943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.067104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.067133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.261 [2024-10-07 13:36:36.067151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.067176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.067201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.067216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.067230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.067255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.261 [2024-10-07 13:36:36.072506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.072679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.072709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.261 [2024-10-07 13:36:36.072727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.072753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.072777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.072793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.072806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.073005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.261 [2024-10-07 13:36:36.077181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.077400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.077430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.261 [2024-10-07 13:36:36.077447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.077472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.077497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.077512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.077526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.077551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.261 [2024-10-07 13:36:36.085137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.085610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.085641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.261 [2024-10-07 13:36:36.085659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.085875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.085933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.085954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.085969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.085994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.261 [2024-10-07 13:36:36.087274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.087417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.087446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.261 [2024-10-07 13:36:36.087464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.087489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.087513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.087527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.087541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.087565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.261 [2024-10-07 13:36:36.095806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.096073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.261 [2024-10-07 13:36:36.096104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.261 [2024-10-07 13:36:36.096122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.261 [2024-10-07 13:36:36.097244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.261 [2024-10-07 13:36:36.097472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.261 [2024-10-07 13:36:36.097496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.261 [2024-10-07 13:36:36.097511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.261 [2024-10-07 13:36:36.097631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.261 [2024-10-07 13:36:36.097756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.261 [2024-10-07 13:36:36.100686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.100719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.262 [2024-10-07 13:36:36.100736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.101003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.101551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.101575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.101588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.262 [2024-10-07 13:36:36.101860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.262 [2024-10-07 13:36:36.105890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.106234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.106264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.262 [2024-10-07 13:36:36.106282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.106505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.106630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.106653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.106677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.262 [2024-10-07 13:36:36.106786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.262 [2024-10-07 13:36:36.109381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.109528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.109558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.262 [2024-10-07 13:36:36.109575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.109601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.109641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.109661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.109698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.262 [2024-10-07 13:36:36.109741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.262 [2024-10-07 13:36:36.116006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.116420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.116451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.262 [2024-10-07 13:36:36.116469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.116687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.116746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.116767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.116781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.262 [2024-10-07 13:36:36.116806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.262 [2024-10-07 13:36:36.119463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.119606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.119634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.262 [2024-10-07 13:36:36.119651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.119685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.119711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.119726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.119740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.262 [2024-10-07 13:36:36.119764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.262 [2024-10-07 13:36:36.129252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.129710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.129742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.262 [2024-10-07 13:36:36.129759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.129981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.130044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.130080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.130097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.130110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.262 [2024-10-07 13:36:36.130589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.262 [2024-10-07 13:36:36.130710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.130743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.262 [2024-10-07 13:36:36.130761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.130986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.131044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.131066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.131081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.262 [2024-10-07 13:36:36.131352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.262 [2024-10-07 13:36:36.140329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.140633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.140672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.262 [2024-10-07 13:36:36.140693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.140801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.140839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.141153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.141182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.262 [2024-10-07 13:36:36.141198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.141213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.141226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.141239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.262 [2024-10-07 13:36:36.141347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.262 [2024-10-07 13:36:36.141372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.141503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.141524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.141537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.262 [2024-10-07 13:36:36.143908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.262 [2024-10-07 13:36:36.150788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.151002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.151032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.262 [2024-10-07 13:36:36.151050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.151075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.151119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.151138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.151152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.262 [2024-10-07 13:36:36.151179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.151199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.262 [2024-10-07 13:36:36.151356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.151384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.262 [2024-10-07 13:36:36.151400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.151425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.151448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.151463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.151477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.262 [2024-10-07 13:36:36.151501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.262 [2024-10-07 13:36:36.161320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.161370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.161525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.161555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.262 [2024-10-07 13:36:36.161572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.161680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.161708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.262 [2024-10-07 13:36:36.161724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.161743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.161769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.161787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.161800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.161813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.262 [2024-10-07 13:36:36.161838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.262 [2024-10-07 13:36:36.161855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.161868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.161881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.262 [2024-10-07 13:36:36.161909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.262 [2024-10-07 13:36:36.172710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.172743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.173104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.173135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.262 [2024-10-07 13:36:36.173152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.173255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.173281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.262 [2024-10-07 13:36:36.173297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 
00:25:56.262 [2024-10-07 13:36:36.173362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.173389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.173411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.173426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.173440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.262 [2024-10-07 13:36:36.173457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.173471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.173484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.262 [2024-10-07 13:36:36.173508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.262 [2024-10-07 13:36:36.173525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.262 [2024-10-07 13:36:36.189503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.189536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.189931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.189964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.262 [2024-10-07 13:36:36.189981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.190083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.190108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.262 [2024-10-07 13:36:36.190124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.190331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.190361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.190561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.190590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.190605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.262 [2024-10-07 13:36:36.190623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.262 [2024-10-07 13:36:36.190637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.262 [2024-10-07 13:36:36.190650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.262 [2024-10-07 13:36:36.190863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.262 [2024-10-07 13:36:36.190887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.262 [2024-10-07 13:36:36.204508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.204541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.262 [2024-10-07 13:36:36.204778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.204809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.262 [2024-10-07 13:36:36.204826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.262 [2024-10-07 13:36:36.204911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.262 [2024-10-07 13:36:36.204938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.262 [2024-10-07 13:36:36.204954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 
00:25:56.262 [2024-10-07 13:36:36.204980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.205002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.262 [2024-10-07 13:36:36.205023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.205038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.205051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.205068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.205083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.205096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.205121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.205137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.263 [2024-10-07 13:36:36.214966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.214999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.215306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.215337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.263 [2024-10-07 13:36:36.215355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.215463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.215496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.263 [2024-10-07 13:36:36.215514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.218479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.218512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.219959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.219985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.220000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.263 [2024-10-07 13:36:36.220017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.220032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.220046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.220094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.220114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.225378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.225411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.225659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.225697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.263 [2024-10-07 13:36:36.225715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.225827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.225854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.263 [2024-10-07 13:36:36.225870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.225897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.225918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.225939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.225954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.225968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.225985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.226000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.226013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.226053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.226069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.263 [2024-10-07 13:36:36.235970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.236004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.236117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.236146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.263 [2024-10-07 13:36:36.236164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.236246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.236274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.263 [2024-10-07 13:36:36.236290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.236510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.236539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.236834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.236859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.236873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.263 [2024-10-07 13:36:36.236891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.236905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.236919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.236988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.237009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.249191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.249225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.249861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.249892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.263 [2024-10-07 13:36:36.249909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.250025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.250051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.263 [2024-10-07 13:36:36.250067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.250363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.250394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.250639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.250664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.250701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.250720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.250735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.250764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.250832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.250853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.263 [2024-10-07 13:36:36.260230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.260279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.260597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.260627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.263 [2024-10-07 13:36:36.260645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.260769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.260797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.263 [2024-10-07 13:36:36.260813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.260922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.260950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.261053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.261075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.261089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.263 [2024-10-07 13:36:36.261106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.261121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.261134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.263333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.263360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.270608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.270640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.270763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.270792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.263 [2024-10-07 13:36:36.270808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.270885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.270912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.263 [2024-10-07 13:36:36.270942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.270968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.270990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.271012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.271027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.271040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.271057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.271072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.271085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.271109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.271126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.263 [2024-10-07 13:36:36.281183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.281215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.281376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.281406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.263 [2024-10-07 13:36:36.281424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.281531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.281558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.263 [2024-10-07 13:36:36.281574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.281770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.281800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.282001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.282025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.282039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.263 [2024-10-07 13:36:36.282057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.282072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.282084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.282150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.282170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.293733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.293771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.293915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.293944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.263 [2024-10-07 13:36:36.293962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.294071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.294097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.263 [2024-10-07 13:36:36.294114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.294139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.294160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.294199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.294219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.294233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.294250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.294264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.294277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.295645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.295678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.263 [2024-10-07 13:36:36.304500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.304532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.304769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.304800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.263 [2024-10-07 13:36:36.304817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.304928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.304955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.263 [2024-10-07 13:36:36.304972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.307778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.307810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.308870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.308897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.308911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.263 [2024-10-07 13:36:36.308934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.308970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.308983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.309729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.309755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.314612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.314680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.263 [2024-10-07 13:36:36.314849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.314879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.263 [2024-10-07 13:36:36.314896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.315096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.263 [2024-10-07 13:36:36.315123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.263 [2024-10-07 13:36:36.315139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.263 [2024-10-07 13:36:36.315158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.315184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.263 [2024-10-07 13:36:36.315203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.315217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.315229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.315254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.263 [2024-10-07 13:36:36.315286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.263 [2024-10-07 13:36:36.315298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.263 [2024-10-07 13:36:36.315311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.263 [2024-10-07 13:36:36.315334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.264 [2024-10-07 13:36:36.324888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.324921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.325063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.325093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.264 [2024-10-07 13:36:36.325111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.325217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.325244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.264 [2024-10-07 13:36:36.325260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.325291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.325313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.325335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.325350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.325363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.264 [2024-10-07 13:36:36.325380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.325395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.325407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.264 [2024-10-07 13:36:36.325432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.264 [2024-10-07 13:36:36.325448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.264 [2024-10-07 13:36:36.338377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.338411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.339069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.339101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.264 [2024-10-07 13:36:36.339118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.339260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.339288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.264 [2024-10-07 13:36:36.339304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.339524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.339554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.339827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.339851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.339865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.264 [2024-10-07 13:36:36.339883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.339897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.339910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.264 [2024-10-07 13:36:36.339992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.264 [2024-10-07 13:36:36.340013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.264 [2024-10-07 13:36:36.349563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.349597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.349837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.349869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.264 [2024-10-07 13:36:36.349887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.349996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.350023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.264 [2024-10-07 13:36:36.350039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.350147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.350174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.351562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.351587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.351602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.264 [2024-10-07 13:36:36.351618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.351632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.351644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.264 [2024-10-07 13:36:36.353799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.264 [2024-10-07 13:36:36.353826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.264 [2024-10-07 13:36:36.359704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.359736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.359850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.359880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.264 [2024-10-07 13:36:36.359896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.360034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.360061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.264 [2024-10-07 13:36:36.360077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.360102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.360124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.360145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.360160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.360174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.264 [2024-10-07 13:36:36.360191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.360211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.360224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.264 [2024-10-07 13:36:36.360254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.264 [2024-10-07 13:36:36.360279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.264 [2024-10-07 13:36:36.369819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.370027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.370162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.370192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.264 [2024-10-07 13:36:36.370209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.370318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.370346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.264 [2024-10-07 13:36:36.370363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.370381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.370567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.370594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.370624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.370636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.264 [2024-10-07 13:36:36.370711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.264 [2024-10-07 13:36:36.370734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.370747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.370761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.264 [2024-10-07 13:36:36.370942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.264 [2024-10-07 13:36:36.382495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.382529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.383275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.383307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.264 [2024-10-07 13:36:36.383325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.383401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.383427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.264 [2024-10-07 13:36:36.383443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.383817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.383853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.384067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.384091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.384106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.264 [2024-10-07 13:36:36.384123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.384137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.384150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.264 [2024-10-07 13:36:36.384217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.264 [2024-10-07 13:36:36.384252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.264 [2024-10-07 13:36:36.392626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.392683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.392826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.392856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.264 [2024-10-07 13:36:36.392874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.392955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.392981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.264 [2024-10-07 13:36:36.392997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.395815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.395847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.396764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.396789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.396804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.264 [2024-10-07 13:36:36.396822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.396837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.396850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.264 [2024-10-07 13:36:36.397415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.264 [2024-10-07 13:36:36.397439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.264 [2024-10-07 13:36:36.402954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.402986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.403172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.403211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.264 [2024-10-07 13:36:36.403229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.403314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.403341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.264 [2024-10-07 13:36:36.403357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.403383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.403404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.403424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.403440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.403454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.264 [2024-10-07 13:36:36.403471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.403485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.403498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.264 [2024-10-07 13:36:36.403523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.264 [2024-10-07 13:36:36.403540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.264 [2024-10-07 13:36:36.413134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.413182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.264 [2024-10-07 13:36:36.413320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.413350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.264 [2024-10-07 13:36:36.413367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.413446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.264 [2024-10-07 13:36:36.413471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.264 [2024-10-07 13:36:36.413487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.264 [2024-10-07 13:36:36.413682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.413727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.264 [2024-10-07 13:36:36.413791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.413812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.413826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.264 [2024-10-07 13:36:36.413843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.264 [2024-10-07 13:36:36.413857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.264 [2024-10-07 13:36:36.413875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.264 [2024-10-07 13:36:36.414059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.264 [2024-10-07 13:36:36.414098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.264 [2024-10-07 13:36:36.425888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.265 [2024-10-07 13:36:36.425922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.265 [2024-10-07 13:36:36.426757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.265 [2024-10-07 13:36:36.426788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.265 [2024-10-07 13:36:36.426806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.265 [2024-10-07 13:36:36.427361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.265 [2024-10-07 13:36:36.427407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.265 [2024-10-07 13:36:36.427424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.265 [2024-10-07 13:36:36.427999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.265 [2024-10-07 13:36:36.428029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.265 [2024-10-07 13:36:36.428148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.265 [2024-10-07 13:36:36.428172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.265 [2024-10-07 13:36:36.428187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.265 [2024-10-07 13:36:36.428204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.265 [2024-10-07 13:36:36.428219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.265 [2024-10-07 13:36:36.428232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.265 [2024-10-07 13:36:36.428258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.265 [2024-10-07 13:36:36.428275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.265 [2024-10-07 13:36:36.436210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.265 [2024-10-07 13:36:36.436244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.265 [2024-10-07 13:36:36.436656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.265 [2024-10-07 13:36:36.436696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.265 [2024-10-07 13:36:36.436715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.265 [2024-10-07 13:36:36.436829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.265 [2024-10-07 13:36:36.436855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.265 [2024-10-07 13:36:36.436871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.265 [2024-10-07 13:36:36.437026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.265 [2024-10-07 13:36:36.437062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.265 [2024-10-07 13:36:36.437179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.265 [2024-10-07 13:36:36.437202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.265 [2024-10-07 13:36:36.437217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.265 [2024-10-07 13:36:36.437234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.265 [2024-10-07 13:36:36.437248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.265 [2024-10-07 13:36:36.437261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.265 [2024-10-07 13:36:36.437367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.265 [2024-10-07 13:36:36.437403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.265 [2024-10-07 13:36:36.446325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.265 [2024-10-07 13:36:36.446371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.265 [2024-10-07 13:36:36.446547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.265 [2024-10-07 13:36:36.446577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.265 [2024-10-07 13:36:36.446594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.265 [2024-10-07 13:36:36.446919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.265 [2024-10-07 13:36:36.446950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.265 [2024-10-07 13:36:36.446967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.265 [2024-10-07 13:36:36.446986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.265 [2024-10-07 13:36:36.447114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.265 [2024-10-07 13:36:36.447141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.265 [2024-10-07 13:36:36.447155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.265 [2024-10-07 13:36:36.447168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.265 [2024-10-07 13:36:36.447274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.265 [2024-10-07 13:36:36.447317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.265 [2024-10-07 13:36:36.447331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.265 [2024-10-07 13:36:36.447344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.265 [2024-10-07 13:36:36.447450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.265 [2024-10-07 13:36:36.457772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.265 [2024-10-07 13:36:36.457807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.265 [2024-10-07 13:36:36.458145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.265 [2024-10-07 13:36:36.458176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.265 [2024-10-07 13:36:36.458201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.265 [2024-10-07 13:36:36.458331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.265 [2024-10-07 13:36:36.458358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.265 [2024-10-07 13:36:36.458375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.265 [2024-10-07 13:36:36.458425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.265 [2024-10-07 13:36:36.458452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.265 [2024-10-07 13:36:36.458489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.265 [2024-10-07 13:36:36.458509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.265 [2024-10-07 13:36:36.458523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.265 [2024-10-07 13:36:36.458540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.265 [2024-10-07 13:36:36.458555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.265 [2024-10-07 13:36:36.458568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.265 [2024-10-07 13:36:36.458824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.265 [2024-10-07 13:36:36.458849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.265 [2024-10-07 13:36:36.472590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.265 [2024-10-07 13:36:36.472624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.265 [2024-10-07 13:36:36.473406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.265 [2024-10-07 13:36:36.473438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.265 [2024-10-07 13:36:36.473456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.265 [2024-10-07 13:36:36.473564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.265 [2024-10-07 13:36:36.473591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.265 [2024-10-07 13:36:36.473607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.265 [2024-10-07 13:36:36.473860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.265 [2024-10-07 13:36:36.473891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.265 [2024-10-07 13:36:36.474090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.265 [2024-10-07 13:36:36.474114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.265 [2024-10-07 13:36:36.474128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.265 [2024-10-07 13:36:36.474146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.265 [2024-10-07 13:36:36.474161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.265 [2024-10-07 13:36:36.474173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.265 [2024-10-07 13:36:36.474410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.265 [2024-10-07 13:36:36.474435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.265 [2024-10-07 13:36:36.482704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.265 [2024-10-07 13:36:36.484215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.265 [2024-10-07 13:36:36.484355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.265 [2024-10-07 13:36:36.484383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.265 [2024-10-07 13:36:36.484401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.265 [2024-10-07 13:36:36.489154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.265 [2024-10-07 13:36:36.489187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.265 [2024-10-07 13:36:36.489205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.265 [2024-10-07 13:36:36.489224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.265 [2024-10-07 13:36:36.489320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.265 [2024-10-07 13:36:36.489344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.265 [2024-10-07 13:36:36.489358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.265 [2024-10-07 13:36:36.489371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.265 [2024-10-07 13:36:36.489396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.265 [2024-10-07 13:36:36.489414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.265 [2024-10-07 13:36:36.489427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.265 [2024-10-07 13:36:36.489441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.265 [2024-10-07 13:36:36.489464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.265 [2024-10-07 13:36:36.493274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.265 [2024-10-07 13:36:36.493450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.265 [2024-10-07 13:36:36.493480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.265 [2024-10-07 13:36:36.493497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.265 [2024-10-07 13:36:36.493522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.265 [2024-10-07 13:36:36.493557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.265 [2024-10-07 13:36:36.493572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.265 [2024-10-07 13:36:36.493585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.265 [2024-10-07 13:36:36.493609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.265 [2024-10-07 13:36:36.494303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.265 [2024-10-07 13:36:36.494500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.265 [2024-10-07 13:36:36.494528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.265 [2024-10-07 13:36:36.494545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.265 [2024-10-07 13:36:36.494570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.265 [2024-10-07 13:36:36.494595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.265 [2024-10-07 13:36:36.494610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.265 [2024-10-07 13:36:36.494623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.265 [2024-10-07 13:36:36.494647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.265 [2024-10-07 13:36:36.505200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.265 [2024-10-07 13:36:36.505471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.265 [2024-10-07 13:36:36.505614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.265 [2024-10-07 13:36:36.505645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.265 [2024-10-07 13:36:36.505663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.265 [2024-10-07 13:36:36.505793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.265 [2024-10-07 13:36:36.505821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.265 [2024-10-07 13:36:36.505838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.265 [2024-10-07 13:36:36.505857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.265 [2024-10-07 13:36:36.506214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.265 [2024-10-07 13:36:36.506250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.265 [2024-10-07 13:36:36.506264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.265 [2024-10-07 13:36:36.506277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.265 [2024-10-07 13:36:36.506509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.265 [2024-10-07 13:36:36.506535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.265 [2024-10-07 13:36:36.506549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.265 [2024-10-07 13:36:36.506563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.265 [2024-10-07 13:36:36.506614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.265 [2024-10-07 13:36:36.517170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.265 [2024-10-07 13:36:36.517203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.265 [2024-10-07 13:36:36.517414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.265 [2024-10-07 13:36:36.517445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.265 [2024-10-07 13:36:36.517463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.265 [2024-10-07 13:36:36.517574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.265 [2024-10-07 13:36:36.517601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.265 [2024-10-07 13:36:36.517617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.265 [2024-10-07 13:36:36.517745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.265 [2024-10-07 13:36:36.517774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.265 [2024-10-07 13:36:36.520622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.265 [2024-10-07 13:36:36.520648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.265 [2024-10-07 13:36:36.520662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.265 [2024-10-07 13:36:36.520689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.265 [2024-10-07 13:36:36.520704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.265 [2024-10-07 13:36:36.520726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.265 [2024-10-07 13:36:36.521663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.265 [2024-10-07 13:36:36.521698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.265 [2024-10-07 13:36:36.527292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.265 [2024-10-07 13:36:36.527341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.265 [2024-10-07 13:36:36.527469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.265 [2024-10-07 13:36:36.527498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.265 [2024-10-07 13:36:36.527516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.265 [2024-10-07 13:36:36.527602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.265 [2024-10-07 13:36:36.527629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.265 [2024-10-07 13:36:36.527657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.265 [2024-10-07 13:36:36.527685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.265 [2024-10-07 13:36:36.527712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.265 [2024-10-07 13:36:36.527731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.265 [2024-10-07 13:36:36.527744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.265 [2024-10-07 13:36:36.527757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.265 [2024-10-07 13:36:36.527782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.265 [2024-10-07 13:36:36.527799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.265 [2024-10-07 13:36:36.527812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.265 [2024-10-07 13:36:36.527825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.265 [2024-10-07 13:36:36.527852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.265 [2024-10-07 13:36:36.537379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.265 [2024-10-07 13:36:36.537729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.265 [2024-10-07 13:36:36.537761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.265 [2024-10-07 13:36:36.537779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.265 [2024-10-07 13:36:36.537844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.265 [2024-10-07 13:36:36.537879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.265 [2024-10-07 13:36:36.537909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.265 [2024-10-07 13:36:36.537925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.265 [2024-10-07 13:36:36.537938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.537962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.266 [2024-10-07 13:36:36.538056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.538084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.266 [2024-10-07 13:36:36.538100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.538126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.538150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.266 [2024-10-07 13:36:36.538165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.266 [2024-10-07 13:36:36.538179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.538202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.266 [2024-10-07 13:36:36.551013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.266 [2024-10-07 13:36:36.551045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.266 [2024-10-07 13:36:36.551166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.551196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.266 [2024-10-07 13:36:36.551213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.551291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.551319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.266 [2024-10-07 13:36:36.551335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.551361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.551382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.551403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.266 [2024-10-07 13:36:36.551423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.266 [2024-10-07 13:36:36.551437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.551455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.266 [2024-10-07 13:36:36.551469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.266 [2024-10-07 13:36:36.551482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.551506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.266 [2024-10-07 13:36:36.551536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.266 [2024-10-07 13:36:36.566801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.266 [2024-10-07 13:36:36.566837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.266 [2024-10-07 13:36:36.567043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.567074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.266 [2024-10-07 13:36:36.567092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.567204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.567231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.266 [2024-10-07 13:36:36.567247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.567273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.567295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.567316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.266 [2024-10-07 13:36:36.567331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.266 [2024-10-07 13:36:36.567346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.567363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.266 [2024-10-07 13:36:36.567378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.266 [2024-10-07 13:36:36.567390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.567415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.266 [2024-10-07 13:36:36.567433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.266 [2024-10-07 13:36:36.579722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.266 [2024-10-07 13:36:36.579757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.266 [2024-10-07 13:36:36.579972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.580003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.266 [2024-10-07 13:36:36.580020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.580128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.580160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.266 [2024-10-07 13:36:36.580177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.580285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.580313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.580430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.266 [2024-10-07 13:36:36.580451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.266 [2024-10-07 13:36:36.580465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.580482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.266 [2024-10-07 13:36:36.580497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.266 [2024-10-07 13:36:36.580510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.582951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.266 [2024-10-07 13:36:36.582979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.266 [2024-10-07 13:36:36.589954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.266 [2024-10-07 13:36:36.589992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.266 [2024-10-07 13:36:36.590468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.590499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.266 [2024-10-07 13:36:36.590524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.590630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.590674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.266 [2024-10-07 13:36:36.590694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.591021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.591049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.591116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.266 [2024-10-07 13:36:36.591137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.266 [2024-10-07 13:36:36.591150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.591183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.266 [2024-10-07 13:36:36.591198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.266 [2024-10-07 13:36:36.591211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.591237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.266 [2024-10-07 13:36:36.591253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.266 [2024-10-07 13:36:36.600075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.266 [2024-10-07 13:36:36.600128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.266 [2024-10-07 13:36:36.600259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.600288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.266 [2024-10-07 13:36:36.600306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.600660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.600698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.266 [2024-10-07 13:36:36.600717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.600737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.600943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.600970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.266 [2024-10-07 13:36:36.600984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.266 [2024-10-07 13:36:36.600997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.601060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.266 [2024-10-07 13:36:36.601081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.266 [2024-10-07 13:36:36.601094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.266 [2024-10-07 13:36:36.601107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.601287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.266 [2024-10-07 13:36:36.614914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.266 [2024-10-07 13:36:36.614948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.266 [2024-10-07 13:36:36.615269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.615301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.266 [2024-10-07 13:36:36.615319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.615427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.615455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.266 [2024-10-07 13:36:36.615471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.615685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.615715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.615763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.266 [2024-10-07 13:36:36.615784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.266 [2024-10-07 13:36:36.615803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.615821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.266 [2024-10-07 13:36:36.615837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.266 [2024-10-07 13:36:36.615850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.616032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.266 [2024-10-07 13:36:36.616056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.266 [2024-10-07 13:36:36.630230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.266 [2024-10-07 13:36:36.630263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.266 [2024-10-07 13:36:36.630619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.630650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.266 [2024-10-07 13:36:36.630677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.630759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.266 [2024-10-07 13:36:36.630786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.266 [2024-10-07 13:36:36.630805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.266 [2024-10-07 13:36:36.631009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.631039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.266 [2024-10-07 13:36:36.631248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.266 [2024-10-07 13:36:36.631273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.266 [2024-10-07 13:36:36.631288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.266 [2024-10-07 13:36:36.631315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.266 [2024-10-07 13:36:36.631329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.266 [2024-10-07 13:36:36.631342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.266 [2024-10-07 13:36:36.631406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.266 [2024-10-07 13:36:36.631441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.266 8459.47 IOPS, 33.04 MiB/s [2024-10-07T11:36:37.978Z] [2024-10-07 13:36:36.642248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.266 [2024-10-07 13:36:36.642278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.266 [2024-10-07 13:36:36.642446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.266 [2024-10-07 13:36:36.642475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.266 [2024-10-07 13:36:36.642492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.266 [2024-10-07 13:36:36.642629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.266 [2024-10-07 13:36:36.642677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.266 [2024-10-07 13:36:36.642696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.266 [2024-10-07 13:36:36.642721] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.266 [2024-10-07 13:36:36.642743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.266 [2024-10-07 13:36:36.642764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.266 [2024-10-07 13:36:36.642780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.266 [2024-10-07 13:36:36.642793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.266 [2024-10-07 13:36:36.642809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.266 [2024-10-07 13:36:36.642824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.266 [2024-10-07 13:36:36.642837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.266 [2024-10-07 13:36:36.642862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.266 [2024-10-07 13:36:36.642878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.266 [2024-10-07 13:36:36.652355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.266 [2024-10-07 13:36:36.652397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.266 [2024-10-07 13:36:36.652558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.266 [2024-10-07 13:36:36.652586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.266 [2024-10-07 13:36:36.652603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.266 [2024-10-07 13:36:36.652702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.266 [2024-10-07 13:36:36.652731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.266 [2024-10-07 13:36:36.652748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.266 [2024-10-07 13:36:36.652766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.266 [2024-10-07 13:36:36.652792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.266 [2024-10-07 13:36:36.652810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.266 [2024-10-07 13:36:36.652822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.266 [2024-10-07 13:36:36.652835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.266 [2024-10-07 13:36:36.652860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.266 [2024-10-07 13:36:36.652877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.266 [2024-10-07 13:36:36.652889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.266 [2024-10-07 13:36:36.652902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.266 [2024-10-07 13:36:36.652939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.267 [2024-10-07 13:36:36.662431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.662621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.662649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.267 [2024-10-07 13:36:36.662672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.662712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.662744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.662773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.662789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.662802] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.662825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.267 [2024-10-07 13:36:36.662938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.662969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.267 [2024-10-07 13:36:36.662985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.663010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.663033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.663048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.663062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.663085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.267 [2024-10-07 13:36:36.672507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.672711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.672740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.267 [2024-10-07 13:36:36.672757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.672782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.672806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.672821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.672835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.672870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.267 [2024-10-07 13:36:36.672897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.673059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.673086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.267 [2024-10-07 13:36:36.673102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.673133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.673157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.673172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.673185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.673210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.267 00:25:56.267 Latency(us) 00:25:56.267 [2024-10-07T11:36:37.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.267 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:56.267 Verification LBA range: start 0x0 length 0x4000 00:25:56.267 NVMe0n1 : 15.05 8437.70 32.96 0.00 0.00 15102.09 3034.07 44661.57 00:25:56.267 [2024-10-07T11:36:37.979Z] =================================================================================================================== 00:25:56.267 [2024-10-07T11:36:37.979Z] Total : 8437.70 32.96 0.00 0.00 15102.09 3034.07 44661.57 00:25:56.267 [2024-10-07 13:36:36.684879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.685019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.685162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.685190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.267 [2024-10-07 13:36:36.685207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.686012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.686041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.267 [2024-10-07 13:36:36.686057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.686076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.267 
[2024-10-07 13:36:36.686097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.686115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.686128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.686141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.686160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.267 [2024-10-07 13:36:36.686176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.686189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.686202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.686218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.267 [2024-10-07 13:36:36.694969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.695114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.695148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.267 [2024-10-07 13:36:36.695166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.695187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.695218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.695237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.695250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.695269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.695294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.267 [2024-10-07 13:36:36.695398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.695425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.267 [2024-10-07 13:36:36.695442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.695463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.695483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.695497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.695510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.695528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.267 [2024-10-07 13:36:36.705041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.705205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.705233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.267 [2024-10-07 13:36:36.705251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.705271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.705294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.705309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.705322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.705341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.267 [2024-10-07 13:36:36.705369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.705538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.705564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.267 [2024-10-07 13:36:36.705580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.705601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.705626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.705641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.705654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.705681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.267 [2024-10-07 13:36:36.715110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.715316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.715344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.267 [2024-10-07 13:36:36.715361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.715382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.715405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.715420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.715434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.715452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.267 [2024-10-07 13:36:36.715479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.715631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.715657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.267 [2024-10-07 13:36:36.715681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.715704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.715723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.715738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.715751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.715768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.267 [2024-10-07 13:36:36.725181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.725319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.725347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.267 [2024-10-07 13:36:36.725364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.725386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.725406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.725420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.725433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.725458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.267 [2024-10-07 13:36:36.725544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.725753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.725782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.267 [2024-10-07 13:36:36.725799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.725820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.725840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.725854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.725868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.725886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.267 [2024-10-07 13:36:36.735249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.735388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.735415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.267 [2024-10-07 13:36:36.735432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.735454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.735473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.735487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.735500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.735518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.267 [2024-10-07 13:36:36.735640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.267 [2024-10-07 13:36:36.735801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.267 [2024-10-07 13:36:36.735828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.267 [2024-10-07 13:36:36.735845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.267 [2024-10-07 13:36:36.735866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.267 [2024-10-07 13:36:36.735887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.267 [2024-10-07 13:36:36.735901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.267 [2024-10-07 13:36:36.735915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.267 [2024-10-07 13:36:36.735933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.267 Received shutdown signal, test time was about 15.000000 seconds 00:25:56.267 00:25:56.267 Latency(us) 00:25:56.267 [2024-10-07T11:36:37.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.267 [2024-10-07T11:36:37.979Z] =================================================================================================================== 00:25:56.267 [2024-10-07T11:36:37.979Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=1 00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # false 00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # trap - ERR 00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@68 -- # print_backtrace 00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # args=('--transport=tcp') 00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # local args 00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1157 -- # xtrace_disable 00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:56.267 ========== Backtrace start: ========== 00:25:56.267 00:25:56.267 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh:68 -> main(["--transport=tcp"]) 00:25:56.267 ... 
00:25:56.267    63          cat $testdir/try.txt
00:25:56.267    64          # if this test fails it means we didn't fail over to the second
00:25:56.267    65          count="$(grep -c "Resetting controller successful" < $testdir/try.txt)"
00:25:56.267    66
00:25:56.267    67          if ((count != 3)); then
00:25:56.267 => 68                  false
00:25:56.267    69          fi
00:25:56.267    70
00:25:56.267    71          # Part 2 of the test. Start removing ports, starting with the one we are connected to, confirm that the ctrlr remains active until the final trid is removed.
00:25:56.267    72          $rootdir/build/examples/bdevperf -z -r $bdevperf_rpc_sock -q 128 -o 4096 -w verify -t 1 -f &> $testdir/try.txt &
00:25:56.267    73          bdevperf_pid=$!
00:25:56.267 ...
00:25:56.267
00:25:56.267 ========== Backtrace end ==========
00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1194 -- # return 0
00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # process_shm --id 0
00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@808 -- # type=--id
00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@809 -- # id=0
00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@810 -- # '[' --id = --pid ']'
00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0
00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]]
00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@820 -- # for n in $shm_files
00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:25:56.267 nvmf_trace.0
00:25:56.267 13:36:37
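The failure is the count check excerpted in the backtrace: host/failover.sh expects three "Resetting controller successful" lines in try.txt, but this run produced only one, so line 68's `false` trips the ERR trap. A minimal standalone sketch of that counting logic, with a hypothetical log file standing in for `$testdir/try.txt` (only the one successful reset, as in the run above):

```shell
#!/usr/bin/env bash
# Sketch of the check at test/nvmf/host/failover.sh lines 65-69:
# count successful resets in the captured bdevperf log and flag
# the run as failed unless all three expected failovers happened.
try_txt=$(mktemp)
# Hypothetical log contents: one successful reset instead of three.
printf 'Resetting controller successful\n' > "$try_txt"

count=$(grep -c "Resetting controller successful" < "$try_txt")
echo "count=$count"

if ((count != 3)); then
    echo "nvmf_failover FAILED: expected 3 successful resets, saw $count"
fi
rm -f "$try_txt"
```

In the real test the same `(( count != 3 ))` arithmetic test drives `false`, which, under `trap ... ERR`, aborts the script and prints the backtrace seen here.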
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@823 -- # return 0
00:25:56.267 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:56.267 [2024-10-07 13:36:20.384642] Starting SPDK v25.01-pre git sha1 d16db39ee / DPDK 24.03.0 initialization...
00:25:56.268 [2024-10-07 13:36:20.384754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1872271 ]
00:25:56.268 [2024-10-07 13:36:20.442804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:56.268 [2024-10-07 13:36:20.556923] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:25:56.268 Running I/O for 15 seconds...
00:25:56.268 8407.00 IOPS, 32.84 MiB/s [2024-10-07T11:36:37.980Z]
00:25:56.268 [2024-10-07 13:36:22.742412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.268 [2024-10-07 13:36:22.742458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.268 [2024-10-07 13:36:22.742488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.268 [2024-10-07 13:36:22.742504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.268 [2024-10-07 13:36:22.742521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.268 [2024-10-07 13:36:22.742543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.742559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.742574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.742590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.742605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.742620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.742635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.742651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.742674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.742693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.742708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.742724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 
13:36:22.742739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.742754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.742768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.742784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.742799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.742825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.742840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.742856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.742870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.742885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.742900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.742915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:116 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.742929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.742944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.742957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.742973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.742987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.743016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.743045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.743075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.743105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.743134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.268 [2024-10-07 13:36:22.743163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 
lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 
[2024-10-07 13:36:22.743591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.743981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.743994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 
13:36:22.744096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744252] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.268 [2024-10-07 13:36:22.744815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.268 [2024-10-07 13:36:22.744831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.269 [2024-10-07 13:36:22.744845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.744860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.269 [2024-10-07 13:36:22.744873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.744888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.269 [2024-10-07 13:36:22.744901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.744916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.269 
[2024-10-07 13:36:22.744931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.744946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.269 [2024-10-07 13:36:22.744959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.744974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.269 [2024-10-07 13:36:22.744988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.269 [2024-10-07 13:36:22.745017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.269 [2024-10-07 13:36:22.745049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.269 [2024-10-07 13:36:22.745078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745112] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78072 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78080 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745208] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78088 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78096 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 
[2024-10-07 13:36:22.745288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78104 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745350] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78112 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78120 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78128 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78136 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78144 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745595] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78152 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78160 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78168 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745746] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745757] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78176 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78184 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745855] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745866] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78192 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78200 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745948] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.745959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.745970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78208 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.745983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.745996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78216 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78224 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746101] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78232 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746138] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78240 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746185] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78248 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746241] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78256 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 
[2024-10-07 13:36:22.746299] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78264 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78272 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746383] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746394] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78280 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:78288 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78296 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78304 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78312 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746636] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78320 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78328 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78336 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746810] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 
13:36:22.746822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78344 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78352 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78360 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.746952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.746963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78368 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.746975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.746992] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.747003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.747015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78376 len:8 PRP1 0x0 PRP2 0x0 00:25:56.269 [2024-10-07 13:36:22.747027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.269 [2024-10-07 13:36:22.747040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.269 [2024-10-07 13:36:22.747050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.269 [2024-10-07 13:36:22.747061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77544 len:8 PRP1 0x0 PRP2 0x0 00:25:56.270 [2024-10-07 13:36:22.747074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.270 [2024-10-07 13:36:22.747131] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d12030 was disconnected and freed. reset controller. 
00:25:56.270 [2024-10-07 13:36:22.748447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.270 [2024-10-07 13:36:22.748514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.270 [2024-10-07 13:36:22.748697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.270 [2024-10-07 13:36:22.748726] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.270 [2024-10-07 13:36:22.748743] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.270 [2024-10-07 13:36:22.748769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.270 [2024-10-07 13:36:22.748793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.270 [2024-10-07 13:36:22.748809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.270 [2024-10-07 13:36:22.748826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.270 [2024-10-07 13:36:22.748853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.270 [2024-10-07 13:36:22.758603] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.270 [2024-10-07 13:36:22.758822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.270 [2024-10-07 13:36:22.758854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.270 [2024-10-07 13:36:22.758872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.270 [2024-10-07 13:36:22.758897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.270 [2024-10-07 13:36:22.758922] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.270 [2024-10-07 13:36:22.758937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.270 [2024-10-07 13:36:22.758951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.270 [2024-10-07 13:36:22.758975] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.270 [2024-10-07 13:36:22.768710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.270 [2024-10-07 13:36:22.768863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.270 [2024-10-07 13:36:22.768900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.270 [2024-10-07 13:36:22.768919] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.270 [2024-10-07 13:36:22.768944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.270 [2024-10-07 13:36:22.768969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.270 [2024-10-07 13:36:22.768985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.270 [2024-10-07 13:36:22.768999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.270 [2024-10-07 13:36:22.769024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.270 [2024-10-07 13:36:22.781606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.270 [2024-10-07 13:36:22.782234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.270 [2024-10-07 13:36:22.782266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.270 [2024-10-07 13:36:22.782283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.270 [2024-10-07 13:36:22.782515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.270 [2024-10-07 13:36:22.782587] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.270 [2024-10-07 13:36:22.782609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.270 [2024-10-07 13:36:22.782624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.270 [2024-10-07 13:36:22.782818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.270 [2024-10-07 13:36:22.796095] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.270 [2024-10-07 13:36:22.796273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.270 [2024-10-07 13:36:22.796304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.270 [2024-10-07 13:36:22.796322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.270 [2024-10-07 13:36:22.796348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.270 [2024-10-07 13:36:22.796372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.270 [2024-10-07 13:36:22.796388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.270 [2024-10-07 13:36:22.796402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.270 [2024-10-07 13:36:22.796427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.270 [2024-10-07 13:36:22.806180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.806367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.806396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.806414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.806439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.806470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.806487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.806501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.806525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.816266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.816388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.816416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.816432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.816457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.816480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.816494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.816508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.816532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.828901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.829511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.829543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.829561] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.829787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.829844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.829866] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.829880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.830063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.844466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.845167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.845199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.845216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.845613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.845847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.845871] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.845886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.845943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.861338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.861528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.861558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.861575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.861602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.861627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.861642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.861656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.862297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.876380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.876589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.876619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.876636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.876662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.876698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.876714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.876727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.876752] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.890696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.890850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.890879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.890896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.890922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.890961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.890980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.890993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.891018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.906025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.907322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.907354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.907378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.907937] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.908204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.908230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.908245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.908449] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.916116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.916264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.916293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.916311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.918949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.921758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.921786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.921803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.922840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.926199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.926365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.926396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.926413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.926439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.926464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.926480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.926493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.926518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.938577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.938714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.938743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.938760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.938786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.938810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.938831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.938846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.938870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.948881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.949096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.949127] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.949146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.949253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.951468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.951497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.951512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.953880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.958982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.959181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.959211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.959228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.959488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.959638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.959688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.959704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.959820] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.970412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.970729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.970761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.970779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.970829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.970858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.970873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.970886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.970912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.984883] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.985300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.985331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.270 [2024-10-07 13:36:22.985349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.270 [2024-10-07 13:36:22.985553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.270 [2024-10-07 13:36:22.985611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.270 [2024-10-07 13:36:22.985648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.270 [2024-10-07 13:36:22.985662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.270 [2024-10-07 13:36:22.985717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.270 [2024-10-07 13:36:22.995247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.270 [2024-10-07 13:36:22.995480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.270 [2024-10-07 13:36:22.995511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.271 [2024-10-07 13:36:22.995530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.271 [2024-10-07 13:36:22.995638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.271 [2024-10-07 13:36:22.995774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.271 [2024-10-07 13:36:22.995798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.271 [2024-10-07 13:36:22.995813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.271 [2024-10-07 13:36:22.995922] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.271 [2024-10-07 13:36:23.005340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.271 [2024-10-07 13:36:23.005473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.271 [2024-10-07 13:36:23.005504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.271 [2024-10-07 13:36:23.005522] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.271 [2024-10-07 13:36:23.005547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.271 [2024-10-07 13:36:23.005571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.271 [2024-10-07 13:36:23.005587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.271 [2024-10-07 13:36:23.005600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.271 [2024-10-07 13:36:23.005625] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.271 [2024-10-07 13:36:23.018968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.271 [2024-10-07 13:36:23.019325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.271 [2024-10-07 13:36:23.019358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.271 [2024-10-07 13:36:23.019376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.271 [2024-10-07 13:36:23.019640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.271 [2024-10-07 13:36:23.019726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.271 [2024-10-07 13:36:23.019749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.271 [2024-10-07 13:36:23.019764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.271 [2024-10-07 13:36:23.019948] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.271 [2024-10-07 13:36:23.033862] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.271 [2024-10-07 13:36:23.034019] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.271 [2024-10-07 13:36:23.034050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.271 [2024-10-07 13:36:23.034068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.271 [2024-10-07 13:36:23.034253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.271 [2024-10-07 13:36:23.034325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.271 [2024-10-07 13:36:23.034347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.271 [2024-10-07 13:36:23.034361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.271 [2024-10-07 13:36:23.034386] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.271 [2024-10-07 13:36:23.049102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.271 [2024-10-07 13:36:23.049629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.271 [2024-10-07 13:36:23.049661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.271 [2024-10-07 13:36:23.049688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.271 [2024-10-07 13:36:23.049906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.271 [2024-10-07 13:36:23.049964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.271 [2024-10-07 13:36:23.049991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.271 [2024-10-07 13:36:23.050006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.271 [2024-10-07 13:36:23.050188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.271 [2024-10-07 13:36:23.064212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.271 [2024-10-07 13:36:23.064377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.271 [2024-10-07 13:36:23.064406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.271 [2024-10-07 13:36:23.064423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.271 [2024-10-07 13:36:23.064449] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.271 [2024-10-07 13:36:23.064474] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.271 [2024-10-07 13:36:23.064489] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.271 [2024-10-07 13:36:23.064509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.271 [2024-10-07 13:36:23.064534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.271 [2024-10-07 13:36:23.074299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.271 [2024-10-07 13:36:23.074502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.271 [2024-10-07 13:36:23.074532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.271 [2024-10-07 13:36:23.074548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.271 [2024-10-07 13:36:23.074575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.271 [2024-10-07 13:36:23.074599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.271 [2024-10-07 13:36:23.074615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.271 [2024-10-07 13:36:23.074628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.271 [2024-10-07 13:36:23.077342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.271 [2024-10-07 13:36:23.086163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.271 [2024-10-07 13:36:23.086408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.271 [2024-10-07 13:36:23.086441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.271 [2024-10-07 13:36:23.086459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.271 [2024-10-07 13:36:23.086567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.271 [2024-10-07 13:36:23.086709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.271 [2024-10-07 13:36:23.086730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.271 [2024-10-07 13:36:23.086743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.271 [2024-10-07 13:36:23.086845] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.271 [2024-10-07 13:36:23.096357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.271 [2024-10-07 13:36:23.096623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.271 [2024-10-07 13:36:23.096656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.271 [2024-10-07 13:36:23.096685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.271 [2024-10-07 13:36:23.096870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.271 [2024-10-07 13:36:23.096944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.271 [2024-10-07 13:36:23.096971] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.271 [2024-10-07 13:36:23.096985] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.271 [2024-10-07 13:36:23.097010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.271 [2024-10-07 13:36:23.107233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.271 [2024-10-07 13:36:23.109615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.271 [2024-10-07 13:36:23.109647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.271 [2024-10-07 13:36:23.109674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.271 [2024-10-07 13:36:23.110639] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.271 [2024-10-07 13:36:23.110958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.271 [2024-10-07 13:36:23.110984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.271 [2024-10-07 13:36:23.110999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.271 [2024-10-07 13:36:23.111245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.271 [2024-10-07 13:36:23.117357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.271 [2024-10-07 13:36:23.117511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.271 [2024-10-07 13:36:23.117540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.271 [2024-10-07 13:36:23.117558] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.271 [2024-10-07 13:36:23.117583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.271 [2024-10-07 13:36:23.117607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.271 [2024-10-07 13:36:23.117623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.271 [2024-10-07 13:36:23.117637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.271 [2024-10-07 13:36:23.117662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.271 [2024-10-07 13:36:23.127545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.271 [2024-10-07 13:36:23.127776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.271 [2024-10-07 13:36:23.127806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.271 [2024-10-07 13:36:23.127823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.271 [2024-10-07 13:36:23.128009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.271 [2024-10-07 13:36:23.128067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.271 [2024-10-07 13:36:23.128087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.271 [2024-10-07 13:36:23.128101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.271 [2024-10-07 13:36:23.128142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.271 [2024-10-07 13:36:23.141771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.271 [2024-10-07 13:36:23.142089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.271 [2024-10-07 13:36:23.142121] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.271 [2024-10-07 13:36:23.142154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.271 [2024-10-07 13:36:23.142359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.271 [2024-10-07 13:36:23.142423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.271 [2024-10-07 13:36:23.142444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.271 [2024-10-07 13:36:23.142458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.271 [2024-10-07 13:36:23.142484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.271 [2024-10-07 13:36:23.156422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.271 [2024-10-07 13:36:23.156599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.271 [2024-10-07 13:36:23.156628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.271 [2024-10-07 13:36:23.156645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.271 [2024-10-07 13:36:23.156678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.271 [2024-10-07 13:36:23.156706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.271 [2024-10-07 13:36:23.156721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.271 [2024-10-07 13:36:23.156735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.271 [2024-10-07 13:36:23.156760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.271 [2024-10-07 13:36:23.167416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.271 [2024-10-07 13:36:23.167705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.271 [2024-10-07 13:36:23.167736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.271 [2024-10-07 13:36:23.167753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.271 [2024-10-07 13:36:23.167863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.271 [2024-10-07 13:36:23.167988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.271 [2024-10-07 13:36:23.168009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.271 [2024-10-07 13:36:23.168038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.271 [2024-10-07 13:36:23.169003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.271 [2024-10-07 13:36:23.178797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.271 [2024-10-07 13:36:23.178961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.271 [2024-10-07 13:36:23.178991] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.271 [2024-10-07 13:36:23.179008] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.271 [2024-10-07 13:36:23.181564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.271 [2024-10-07 13:36:23.182525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.271 [2024-10-07 13:36:23.182551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.271 [2024-10-07 13:36:23.182580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.271 [2024-10-07 13:36:23.182785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.271 [2024-10-07 13:36:23.189062] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.271 [2024-10-07 13:36:23.189253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.271 [2024-10-07 13:36:23.189282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.271 [2024-10-07 13:36:23.189299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.271 [2024-10-07 13:36:23.189325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.271 [2024-10-07 13:36:23.189350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.271 [2024-10-07 13:36:23.189365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.271 [2024-10-07 13:36:23.189379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.271 [2024-10-07 13:36:23.189403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.271 [2024-10-07 13:36:23.201172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.271 [2024-10-07 13:36:23.201345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.271 [2024-10-07 13:36:23.201375] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.271 [2024-10-07 13:36:23.201393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.271 [2024-10-07 13:36:23.201419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.271 [2024-10-07 13:36:23.201444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.271 [2024-10-07 13:36:23.201460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.271 [2024-10-07 13:36:23.201474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.271 [2024-10-07 13:36:23.201498] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.271 [2024-10-07 13:36:23.211260] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.271 [2024-10-07 13:36:23.211417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.271 [2024-10-07 13:36:23.211446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.271 [2024-10-07 13:36:23.211463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.271 [2024-10-07 13:36:23.211488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.271 [2024-10-07 13:36:23.211512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.271 [2024-10-07 13:36:23.211527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.271 [2024-10-07 13:36:23.211541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.271 [2024-10-07 13:36:23.211566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.271 [2024-10-07 13:36:23.221343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.271 [2024-10-07 13:36:23.221495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.271 [2024-10-07 13:36:23.221530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.271 [2024-10-07 13:36:23.221548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.271 [2024-10-07 13:36:23.221690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.271 [2024-10-07 13:36:23.221907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.271 [2024-10-07 13:36:23.221931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.221961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.222010] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.234474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.235119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.235152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.235170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.235404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.235504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.235525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.235538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.235580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.250130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.250438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.250471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.250489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.250550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.250578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.250594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.250608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.250875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.261967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.262189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.262219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.262236] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.262344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.262461] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.262482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.262496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.262602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.272403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.272641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.272680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.272700] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.273206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.273238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.273253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.273266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.273290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.282558] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.282681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.282721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.282738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.282764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.282788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.282803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.282816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.283076] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.294211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.294443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.294475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.294494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.294604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.296779] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.296807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.296822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.297703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.304300] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.304498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.304528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.304546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.304572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.304598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.304613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.304627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.304651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.314629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.314773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.314802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.314820] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.315003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.315084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.315105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.315119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.315143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.327960] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.328615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.328647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.328674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.328898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.329125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.329151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.329166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.329217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.344201] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.344474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.344507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.344531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.345040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.345277] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.345302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.345318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.345370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.356679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.358976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.359010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.359028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.359843] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.360253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.360278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.360307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.360385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.366767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.366949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.366977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.366994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.367019] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.367044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.367058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.367071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.367095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.376881] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.377060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.377089] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.377106] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.377131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.377593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.377638] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.377655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.377886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.389863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.390474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.390506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.390524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.390754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.390812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.390832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.390847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.391039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.402806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.272 [2024-10-07 13:36:23.403046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.272 [2024-10-07 13:36:23.403078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.272 [2024-10-07 13:36:23.403096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.272 [2024-10-07 13:36:23.405471] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.272 [2024-10-07 13:36:23.406339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.272 [2024-10-07 13:36:23.406378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.272 [2024-10-07 13:36:23.406393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.272 [2024-10-07 13:36:23.406803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.272 [2024-10-07 13:36:23.412896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.272 [2024-10-07 13:36:23.413050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.272 [2024-10-07 13:36:23.413079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.272 [2024-10-07 13:36:23.413096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.272 [2024-10-07 13:36:23.413122] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.272 [2024-10-07 13:36:23.413146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.272 [2024-10-07 13:36:23.413162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.272 [2024-10-07 13:36:23.413176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.272 [2024-10-07 13:36:23.413200] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.272 [2024-10-07 13:36:23.423099] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.272 [2024-10-07 13:36:23.423261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.272 [2024-10-07 13:36:23.423292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.272 [2024-10-07 13:36:23.423310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.272 [2024-10-07 13:36:23.423335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.272 [2024-10-07 13:36:23.423363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.272 [2024-10-07 13:36:23.423378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.272 [2024-10-07 13:36:23.423393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.272 [2024-10-07 13:36:23.423877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.272 [2024-10-07 13:36:23.436386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.272 [2024-10-07 13:36:23.436755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.272 [2024-10-07 13:36:23.436787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.272 [2024-10-07 13:36:23.436805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.272 [2024-10-07 13:36:23.437009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.272 [2024-10-07 13:36:23.437082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.272 [2024-10-07 13:36:23.437118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.272 [2024-10-07 13:36:23.437132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.272 [2024-10-07 13:36:23.437324] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.272 [2024-10-07 13:36:23.448287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.272 [2024-10-07 13:36:23.448527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.272 [2024-10-07 13:36:23.448569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.272 [2024-10-07 13:36:23.448588] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.272 [2024-10-07 13:36:23.451102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.272 [2024-10-07 13:36:23.452275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.272 [2024-10-07 13:36:23.452302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.272 [2024-10-07 13:36:23.452332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.272 [2024-10-07 13:36:23.452796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.272 [2024-10-07 13:36:23.458372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.272 [2024-10-07 13:36:23.458586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.272 [2024-10-07 13:36:23.458615] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.272 [2024-10-07 13:36:23.458631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.272 [2024-10-07 13:36:23.458664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.272 [2024-10-07 13:36:23.458699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.272 [2024-10-07 13:36:23.458714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.272 [2024-10-07 13:36:23.458727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.272 [2024-10-07 13:36:23.458753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.272 [2024-10-07 13:36:23.468579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.272 [2024-10-07 13:36:23.468769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.272 [2024-10-07 13:36:23.468798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.272 [2024-10-07 13:36:23.468816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.272 [2024-10-07 13:36:23.469000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.469057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.469093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.469107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.469132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.482642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.483232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.483265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.483283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.483502] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.483559] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.483580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.483593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.483619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.497522] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.498162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.498194] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.498212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.498447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.498519] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.498540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.498561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.498765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.508630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.508840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.508871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.508888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.511076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.511458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.511484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.511499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.512285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.518724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.518874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.518903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.518920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.523070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.523271] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.523295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.523309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.523417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.528809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.529139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.529171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.529190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.529242] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.529270] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.529286] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.529300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.529325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.541181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.541886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.541919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.541937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.542190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.542400] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.542440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.542455] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.542506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.551275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.551461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.551491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.551508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.553172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.554986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.555013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.555027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.555673] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.561997] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.562152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.562182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.562199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.562225] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.562249] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.562265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.562278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.562302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.572097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.572230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.572259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.572277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.572477] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.572554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.572574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.572603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.572628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.587648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.587812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.587841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.587859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.587885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.587910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.587925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.587940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.587965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 8354.00 IOPS, 32.63 MiB/s [2024-10-07T11:36:37.985Z] [2024-10-07 13:36:23.601946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.602152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.602182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.602199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.602308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.602435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.602456] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.602470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.602573] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.612030] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.612210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.612239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.612256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.612282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.612306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.612322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.612336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.612366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.622113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.622275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.622305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.622322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.622522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.622592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.622612] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.622640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.622674] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.637123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.637321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.637353] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.637371] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.637397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.637422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.637437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.637451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.637475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.651129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.651515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.651547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.651565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.651781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.651998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.652022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.652037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.652088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.666411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.666829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.666866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.666884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.667089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.667161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.667182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.667196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.667237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.682499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.682879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.682911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.682928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.683133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.683191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.683212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.683225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.683251] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.697756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.273 [2024-10-07 13:36:23.697910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.273 [2024-10-07 13:36:23.697941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.273 [2024-10-07 13:36:23.697959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.273 [2024-10-07 13:36:23.697984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.273 [2024-10-07 13:36:23.698009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.273 [2024-10-07 13:36:23.698024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.273 [2024-10-07 13:36:23.698038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.273 [2024-10-07 13:36:23.698062] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.273 [2024-10-07 13:36:23.707840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.273 [2024-10-07 13:36:23.708004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.273 [2024-10-07 13:36:23.708035] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.273 [2024-10-07 13:36:23.708053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.273 [2024-10-07 13:36:23.708078] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.273 [2024-10-07 13:36:23.708108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.273 [2024-10-07 13:36:23.708124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.273 [2024-10-07 13:36:23.708137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.273 [2024-10-07 13:36:23.708162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.273 [2024-10-07 13:36:23.718079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.273 [2024-10-07 13:36:23.718279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.718309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.718326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.718351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.718375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.718390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.718404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.718428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.730575] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.730921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.730954] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.730972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.731316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.731395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.731417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.731446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.731629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.742536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.742750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.742780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.742799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.742908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.745953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.745980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.745995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.746834] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.752621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.752796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.752826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.752844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.752869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.752894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.752909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.752922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.752946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.762728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.762877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.762908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.762925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.763110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.763169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.763191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.763205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.763230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.776924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.777362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.777394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.777411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.777621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.777687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.777709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.777722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.777748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.792069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.792430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.792462] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.792489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.792542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.792570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.792585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.792598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.792872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.805947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.806066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.806097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.806115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.806140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.806165] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.806180] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.806193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.806218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.817026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.817269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.817300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.817318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.817428] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.817555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.817576] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.817605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.817758] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.827113] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.827362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.827391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.827408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.827434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.827459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.827481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.827495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.827745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.838080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.838289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.838320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.838338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.838522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.838596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.838617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.838631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.838684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.850386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.850620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.850652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.850682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.852996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.853867] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.853892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.853906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.854297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.860474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.860630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.860660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.860687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.860714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.860738] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.860753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.860767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.860791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.870651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.870820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.870850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.870867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.870893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.870917] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.870933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.870947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.871434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.883684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.884267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.884300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.884318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.884553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.884609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.884646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.884662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.884867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.894830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.895062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.895096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.895114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.897348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.897657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.897713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.897731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.898789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.904917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.905137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.905167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.905184] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.909372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.909568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.909593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.909608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.909725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.915166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.915392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.915423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.915440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.915625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.915703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.915727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.915742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.915767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.927279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.928004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.928037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.928066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.928290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.928499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.928538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.928553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.928603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.937368] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.937519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.937550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.937568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.937594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.940138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.940164] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.940194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.941109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.274 [2024-10-07 13:36:23.947741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.274 [2024-10-07 13:36:23.947924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.274 [2024-10-07 13:36:23.947964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.274 [2024-10-07 13:36:23.947982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.274 [2024-10-07 13:36:23.948008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.274 [2024-10-07 13:36:23.948036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.274 [2024-10-07 13:36:23.948052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.274 [2024-10-07 13:36:23.948065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.274 [2024-10-07 13:36:23.948090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.275 [2024-10-07 13:36:23.957828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.275 [2024-10-07 13:36:23.957976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.275 [2024-10-07 13:36:23.958006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.275 [2024-10-07 13:36:23.958024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.275 [2024-10-07 13:36:23.958208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.275 [2024-10-07 13:36:23.958281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.275 [2024-10-07 13:36:23.958303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.275 [2024-10-07 13:36:23.958317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.275 [2024-10-07 13:36:23.958341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.275 [2024-10-07 13:36:23.970925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.275 [2024-10-07 13:36:23.971562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.275 [2024-10-07 13:36:23.971593] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.275 [2024-10-07 13:36:23.971610] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.275 [2024-10-07 13:36:23.971839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.275 [2024-10-07 13:36:23.972137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.275 [2024-10-07 13:36:23.972176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.275 [2024-10-07 13:36:23.972190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.275 [2024-10-07 13:36:23.972260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.275 [2024-10-07 13:36:23.981469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:23.981735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:23.981767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:23.981784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:23.981890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:23.982001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:23.982023] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:23.982038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:23.983043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:23.992079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:23.992257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:23.992288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:23.992305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:23.992331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:23.992355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:23.992371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:23.992385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:23.992409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:24.002163] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:24.002315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:24.002346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:24.002363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:24.002389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:24.002413] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:24.002428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:24.002442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:24.002466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:24.014889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:24.015288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:24.015320] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:24.015338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:24.015441] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:24.015477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:24.015494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:24.015507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:24.015700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:24.025104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:24.025246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:24.025276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:24.025293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:24.025727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:24.025885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:24.025910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:24.025924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:24.026045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:24.035202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:24.035366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:24.035397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:24.035414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:24.035682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:24.035818] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:24.035842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:24.035856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:24.035964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:24.045441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:24.045603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:24.045634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:24.045652] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:24.045683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:24.045709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:24.045724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:24.045738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:24.045769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:24.058275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:24.058514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:24.058545] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:24.058563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:24.058588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:24.058731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:24.058756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:24.058771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:24.058952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:24.070936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:24.071155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:24.071186] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:24.071204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:24.071313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:24.071424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:24.071447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:24.071461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:24.074484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:24.081616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:24.081818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:24.081849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:24.081866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:24.081892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:24.081916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:24.081931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:24.081945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:24.081969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:24.091713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:24.091873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:24.091903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:24.091925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:24.091952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:24.091989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:24.092007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:24.092021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:24.092046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:24.104281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:24.104482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:24.104512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:24.104530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:24.104724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:24.104784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:24.104805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:24.104819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:24.104844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:24.114986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:24.115236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:24.115266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:24.115284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:24.115394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:24.115521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:24.115543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:24.115557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:24.115687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:24.125082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:24.125276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:24.125306] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:24.125323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:24.125348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:24.125378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:24.125394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:24.125408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:24.125432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:24.137041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:24.137316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:24.137346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:24.137364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:24.137567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:24.137626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:24.137678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:24.137695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:24.137736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.275 [2024-10-07 13:36:24.149135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.275 [2024-10-07 13:36:24.151288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.275 [2024-10-07 13:36:24.151321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.275 [2024-10-07 13:36:24.151339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.275 [2024-10-07 13:36:24.152011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.275 [2024-10-07 13:36:24.152298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.275 [2024-10-07 13:36:24.152324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.275 [2024-10-07 13:36:24.152338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.275 [2024-10-07 13:36:24.152556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.276 [2024-10-07 13:36:24.159410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.276 [2024-10-07 13:36:24.159530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.276 [2024-10-07 13:36:24.159559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.276 [2024-10-07 13:36:24.159576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.276 [2024-10-07 13:36:24.159602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.276 [2024-10-07 13:36:24.160028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.276 [2024-10-07 13:36:24.160051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.276 [2024-10-07 13:36:24.160072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.276 [2024-10-07 13:36:24.160098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.276 [2024-10-07 13:36:24.169648] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.276 [2024-10-07 13:36:24.169823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.276 [2024-10-07 13:36:24.169854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.276 [2024-10-07 13:36:24.169871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.276 [2024-10-07 13:36:24.170056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.276 [2024-10-07 13:36:24.170127] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.276 [2024-10-07 13:36:24.170148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.276 [2024-10-07 13:36:24.170177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.276 [2024-10-07 13:36:24.170201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.276 [2024-10-07 13:36:24.183723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.276 [2024-10-07 13:36:24.184110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.276 [2024-10-07 13:36:24.184149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.276 [2024-10-07 13:36:24.184167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.276 [2024-10-07 13:36:24.184374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.276 [2024-10-07 13:36:24.184447] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.276 [2024-10-07 13:36:24.184467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.276 [2024-10-07 13:36:24.184481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.276 [2024-10-07 13:36:24.184506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.276 [2024-10-07 13:36:24.199528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.276 [2024-10-07 13:36:24.200174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.276 [2024-10-07 13:36:24.200206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.276 [2024-10-07 13:36:24.200224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.276 [2024-10-07 13:36:24.200604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.276 [2024-10-07 13:36:24.200719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.276 [2024-10-07 13:36:24.200741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.276 [2024-10-07 13:36:24.200755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.276 [2024-10-07 13:36:24.200937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.276 [2024-10-07 13:36:24.215498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.276 [2024-10-07 13:36:24.215643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.276 [2024-10-07 13:36:24.215682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.276 [2024-10-07 13:36:24.215708] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.276 [2024-10-07 13:36:24.216335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.276 [2024-10-07 13:36:24.216580] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.276 [2024-10-07 13:36:24.216604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.276 [2024-10-07 13:36:24.216619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.276 [2024-10-07 13:36:24.216678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.276 [2024-10-07 13:36:24.229382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.276 [2024-10-07 13:36:24.229500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.276 [2024-10-07 13:36:24.229531] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.276 [2024-10-07 13:36:24.229563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.276 [2024-10-07 13:36:24.229588] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.276 [2024-10-07 13:36:24.229628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.276 [2024-10-07 13:36:24.229643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.276 [2024-10-07 13:36:24.229656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.276 [2024-10-07 13:36:24.229691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.276 [2024-10-07 13:36:24.242046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.276 [2024-10-07 13:36:24.242288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.276 [2024-10-07 13:36:24.242318] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.276 [2024-10-07 13:36:24.242335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.276 [2024-10-07 13:36:24.242451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.276 [2024-10-07 13:36:24.242578] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.276 [2024-10-07 13:36:24.242601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.276 [2024-10-07 13:36:24.242615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.276 [2024-10-07 13:36:24.242808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.276 [2024-10-07 13:36:24.252132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.276 [2024-10-07 13:36:24.252309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.276 [2024-10-07 13:36:24.252339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.276 [2024-10-07 13:36:24.252356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.276 [2024-10-07 13:36:24.252381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.276 [2024-10-07 13:36:24.252405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.276 [2024-10-07 13:36:24.252426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.276 [2024-10-07 13:36:24.252441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.276 [2024-10-07 13:36:24.252466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.276 [2024-10-07 13:36:24.262215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.276 [2024-10-07 13:36:24.262410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.276 [2024-10-07 13:36:24.262441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.276 [2024-10-07 13:36:24.262459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.276 [2024-10-07 13:36:24.262484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.276 [2024-10-07 13:36:24.262508] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.276 [2024-10-07 13:36:24.262523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.276 [2024-10-07 13:36:24.262537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.276 [2024-10-07 13:36:24.262562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.276 [2024-10-07 13:36:24.277185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.276 [2024-10-07 13:36:24.277684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.276 [2024-10-07 13:36:24.277727] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.276 [2024-10-07 13:36:24.277745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.276 [2024-10-07 13:36:24.278186] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.276 [2024-10-07 13:36:24.278479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.276 [2024-10-07 13:36:24.278504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.276 [2024-10-07 13:36:24.278518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.276 [2024-10-07 13:36:24.278735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.276 [2024-10-07 13:36:24.288739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.276 [2024-10-07 13:36:24.288980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.276 [2024-10-07 13:36:24.289011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.276 [2024-10-07 13:36:24.289029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.276 [2024-10-07 13:36:24.289136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.276 [2024-10-07 13:36:24.289247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.276 [2024-10-07 13:36:24.289283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.276 [2024-10-07 13:36:24.289297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.276 [2024-10-07 13:36:24.289398] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.276 [2024-10-07 13:36:24.298828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.276 [2024-10-07 13:36:24.299000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.276 [2024-10-07 13:36:24.299030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.276 [2024-10-07 13:36:24.299048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.276 [2024-10-07 13:36:24.299073] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.276 [2024-10-07 13:36:24.299098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.276 [2024-10-07 13:36:24.299113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.276 [2024-10-07 13:36:24.299126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.276 [2024-10-07 13:36:24.299151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.276 [2024-10-07 13:36:24.308922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.276 [2024-10-07 13:36:24.309110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.276 [2024-10-07 13:36:24.309140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.276 [2024-10-07 13:36:24.309157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.276 [2024-10-07 13:36:24.309183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.276 [2024-10-07 13:36:24.309207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.276 [2024-10-07 13:36:24.309222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.276 [2024-10-07 13:36:24.309236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.276 [2024-10-07 13:36:24.309260] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.276 [2024-10-07 13:36:24.323112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.276 [2024-10-07 13:36:24.323406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.276 [2024-10-07 13:36:24.323437] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.276 [2024-10-07 13:36:24.323454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.276 [2024-10-07 13:36:24.323673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.276 [2024-10-07 13:36:24.323882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.276 [2024-10-07 13:36:24.323921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.276 [2024-10-07 13:36:24.323936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.276 [2024-10-07 13:36:24.323986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.276 [2024-10-07 13:36:24.336381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.276 [2024-10-07 13:36:24.336990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.276 [2024-10-07 13:36:24.337023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.276 [2024-10-07 13:36:24.337040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.276 [2024-10-07 13:36:24.337281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.276 [2024-10-07 13:36:24.337821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.276 [2024-10-07 13:36:24.337847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.276 [2024-10-07 13:36:24.337869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.276 [2024-10-07 13:36:24.338178] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.276 [2024-10-07 13:36:24.346470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.276 [2024-10-07 13:36:24.346688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.276 [2024-10-07 13:36:24.346719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.276 [2024-10-07 13:36:24.346736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.276 [2024-10-07 13:36:24.347500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.276 [2024-10-07 13:36:24.347703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.276 [2024-10-07 13:36:24.347728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.276 [2024-10-07 13:36:24.347743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.276 [2024-10-07 13:36:24.347850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.276 [2024-10-07 13:36:24.356553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.276 [2024-10-07 13:36:24.356705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.276 [2024-10-07 13:36:24.356735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.276 [2024-10-07 13:36:24.356752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.276 [2024-10-07 13:36:24.356777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.276 [2024-10-07 13:36:24.356802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.276 [2024-10-07 13:36:24.356818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.276 [2024-10-07 13:36:24.356831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.276 [2024-10-07 13:36:24.356855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.276 [2024-10-07 13:36:24.366636] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.276 [2024-10-07 13:36:24.366803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.276 [2024-10-07 13:36:24.366834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.276 [2024-10-07 13:36:24.366852] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.276 [2024-10-07 13:36:24.366877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.276 [2024-10-07 13:36:24.366901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.276 [2024-10-07 13:36:24.366917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.276 [2024-10-07 13:36:24.366940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.276 [2024-10-07 13:36:24.367431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.276 [2024-10-07 13:36:24.381617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.276 [2024-10-07 13:36:24.382221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.276 [2024-10-07 13:36:24.382259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.276 [2024-10-07 13:36:24.382277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.276 [2024-10-07 13:36:24.382341] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.276 [2024-10-07 13:36:24.382369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.277 [2024-10-07 13:36:24.382385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.277 [2024-10-07 13:36:24.382399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.277 [2024-10-07 13:36:24.382423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.277 [2024-10-07 13:36:24.392598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.277 [2024-10-07 13:36:24.392881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.277 [2024-10-07 13:36:24.392913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.277 [2024-10-07 13:36:24.392931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.277 [2024-10-07 13:36:24.393041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.277 [2024-10-07 13:36:24.393152] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.277 [2024-10-07 13:36:24.393189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.277 [2024-10-07 13:36:24.393203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.277 [2024-10-07 13:36:24.396852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.277 [2024-10-07 13:36:24.402708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.277 [2024-10-07 13:36:24.402861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.277 [2024-10-07 13:36:24.402891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.277 [2024-10-07 13:36:24.402908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.277 [2024-10-07 13:36:24.402933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.277 [2024-10-07 13:36:24.402957] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.277 [2024-10-07 13:36:24.402973] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.277 [2024-10-07 13:36:24.402986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.277 [2024-10-07 13:36:24.403019] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.277 [2024-10-07 13:36:24.412799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.277 [2024-10-07 13:36:24.413114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.277 [2024-10-07 13:36:24.413151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.277 [2024-10-07 13:36:24.413171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.277 [2024-10-07 13:36:24.413223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.277 [2024-10-07 13:36:24.413251] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.277 [2024-10-07 13:36:24.413267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.277 [2024-10-07 13:36:24.413281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.277 [2024-10-07 13:36:24.413464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.277 [2024-10-07 13:36:24.426889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.277 [2024-10-07 13:36:24.427020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.277 [2024-10-07 13:36:24.427049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.277 [2024-10-07 13:36:24.427066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.277 [2024-10-07 13:36:24.427092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.277 [2024-10-07 13:36:24.427116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.277 [2024-10-07 13:36:24.427132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.277 [2024-10-07 13:36:24.427145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.277 [2024-10-07 13:36:24.427170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.277 [2024-10-07 13:36:24.443746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.277 [2024-10-07 13:36:24.443997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.277 [2024-10-07 13:36:24.444030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.277 [2024-10-07 13:36:24.444048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.277 [2024-10-07 13:36:24.444074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.277 [2024-10-07 13:36:24.444099] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.277 [2024-10-07 13:36:24.444114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.277 [2024-10-07 13:36:24.444127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.277 [2024-10-07 13:36:24.444152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.277 [2024-10-07 13:36:24.458282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.277 [2024-10-07 13:36:24.458427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.277 [2024-10-07 13:36:24.458457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.277 [2024-10-07 13:36:24.458475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.277 [2024-10-07 13:36:24.458501] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.277 [2024-10-07 13:36:24.458531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.277 [2024-10-07 13:36:24.458548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.277 [2024-10-07 13:36:24.458562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.277 [2024-10-07 13:36:24.458586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.277 [2024-10-07 13:36:24.474000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.277 [2024-10-07 13:36:24.474120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.277 [2024-10-07 13:36:24.474149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.277 [2024-10-07 13:36:24.474166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.277 [2024-10-07 13:36:24.474192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.277 [2024-10-07 13:36:24.474217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.277 [2024-10-07 13:36:24.474233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.277 [2024-10-07 13:36:24.474246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.277 [2024-10-07 13:36:24.474271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.277 [2024-10-07 13:36:24.489592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.277 [2024-10-07 13:36:24.489776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.277 [2024-10-07 13:36:24.489807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.277 [2024-10-07 13:36:24.489824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.277 [2024-10-07 13:36:24.489850] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.277 [2024-10-07 13:36:24.489875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.277 [2024-10-07 13:36:24.489891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.277 [2024-10-07 13:36:24.489904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.277 [2024-10-07 13:36:24.489929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.277 [2024-10-07 13:36:24.502912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.277 [2024-10-07 13:36:24.503141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.277 [2024-10-07 13:36:24.503172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.277 [2024-10-07 13:36:24.503190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.277 [2024-10-07 13:36:24.503447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.277 [2024-10-07 13:36:24.503605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.277 [2024-10-07 13:36:24.503630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.277 [2024-10-07 13:36:24.503645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.277 [2024-10-07 13:36:24.505879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.277 [2024-10-07 13:36:24.513283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.277 [2024-10-07 13:36:24.513500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.277 [2024-10-07 13:36:24.513530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.277 [2024-10-07 13:36:24.513548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.277 [2024-10-07 13:36:24.513876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.277 [2024-10-07 13:36:24.514250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.277 [2024-10-07 13:36:24.514274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.277 [2024-10-07 13:36:24.514288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.277 [2024-10-07 13:36:24.514349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.277 [2024-10-07 13:36:24.523370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.277 [2024-10-07 13:36:24.523559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.277 [2024-10-07 13:36:24.523590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.277 [2024-10-07 13:36:24.523606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.277 [2024-10-07 13:36:24.523631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.277 [2024-10-07 13:36:24.523657] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.277 [2024-10-07 13:36:24.523682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.277 [2024-10-07 13:36:24.523696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.277 [2024-10-07 13:36:24.523880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.277 [2024-10-07 13:36:24.533639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.277 [2024-10-07 13:36:24.533883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.277 [2024-10-07 13:36:24.533913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.277 [2024-10-07 13:36:24.533930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.277 [2024-10-07 13:36:24.537045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.277 [2024-10-07 13:36:24.537828] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.277 [2024-10-07 13:36:24.537853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.277 [2024-10-07 13:36:24.537868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.277 [2024-10-07 13:36:24.538301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.277 [2024-10-07 13:36:24.543750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.277 [2024-10-07 13:36:24.543875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.277 [2024-10-07 13:36:24.543903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.277 [2024-10-07 13:36:24.543926] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.277 [2024-10-07 13:36:24.543953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.277 [2024-10-07 13:36:24.543976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.277 [2024-10-07 13:36:24.543991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.277 [2024-10-07 13:36:24.544006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.277 [2024-10-07 13:36:24.544030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.277 [2024-10-07 13:36:24.553836] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.277 [2024-10-07 13:36:24.553984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.277 [2024-10-07 13:36:24.554014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.277 [2024-10-07 13:36:24.554031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.277 [2024-10-07 13:36:24.554057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.277 [2024-10-07 13:36:24.554082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.277 [2024-10-07 13:36:24.554098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.277 [2024-10-07 13:36:24.554111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.277 [2024-10-07 13:36:24.554136] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.277 [2024-10-07 13:36:24.567856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.277 [2024-10-07 13:36:24.568011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.277 [2024-10-07 13:36:24.568040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.277 [2024-10-07 13:36:24.568057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.277 [2024-10-07 13:36:24.568082] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.277 [2024-10-07 13:36:24.568106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.277 [2024-10-07 13:36:24.568121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.277 [2024-10-07 13:36:24.568135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.277 [2024-10-07 13:36:24.568160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.277 [2024-10-07 13:36:24.579754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.277 [2024-10-07 13:36:24.580011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.277 [2024-10-07 13:36:24.580044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.277 [2024-10-07 13:36:24.580062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.277 [2024-10-07 13:36:24.580171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.277 [2024-10-07 13:36:24.580309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.277 [2024-10-07 13:36:24.580334] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.277 [2024-10-07 13:36:24.580348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.277 [2024-10-07 13:36:24.580387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.277 [2024-10-07 13:36:24.592958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.277 [2024-10-07 13:36:24.593499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.277 [2024-10-07 13:36:24.593530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.277 [2024-10-07 13:36:24.593548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.277 [2024-10-07 13:36:24.593776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.277 [2024-10-07 13:36:24.593986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.277 [2024-10-07 13:36:24.594009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.277 [2024-10-07 13:36:24.594023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.277 [2024-10-07 13:36:24.594105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.277 8390.33 IOPS, 32.77 MiB/s [2024-10-07T11:36:37.989Z]
00:25:56.277 [2024-10-07 13:36:24.610227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.277 [2024-10-07 13:36:24.610597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.277 [2024-10-07 13:36:24.610630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.277 [2024-10-07 13:36:24.610647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.277 [2024-10-07 13:36:24.610705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.277 [2024-10-07 13:36:24.610734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.277 [2024-10-07 13:36:24.610751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.277 [2024-10-07 13:36:24.610764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.277 [2024-10-07 13:36:24.610789] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.277 [2024-10-07 13:36:24.620564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.277 [2024-10-07 13:36:24.620724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.277 [2024-10-07 13:36:24.620754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.277 [2024-10-07 13:36:24.620772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.277 [2024-10-07 13:36:24.621743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.277 [2024-10-07 13:36:24.623688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.277 [2024-10-07 13:36:24.623714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.277 [2024-10-07 13:36:24.623728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.277 [2024-10-07 13:36:24.624264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.277 [2024-10-07 13:36:24.632553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.277 [2024-10-07 13:36:24.632839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.277 [2024-10-07 13:36:24.632872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.277 [2024-10-07 13:36:24.632890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.277 [2024-10-07 13:36:24.633010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.277 [2024-10-07 13:36:24.633136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.277 [2024-10-07 13:36:24.633157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.277 [2024-10-07 13:36:24.633171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.635426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.642640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.642791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.642821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.642838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.642863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.642887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.642903] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.642916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.642941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.653458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.653663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.653701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.653718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.653903] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.653960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.653980] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.654010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.654035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.665228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.665510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.665542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.665566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.667064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.667734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.667759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.667773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.668029] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.675315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.675465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.675494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.675511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.675536] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.675560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.675575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.675589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.675613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.685542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.685730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.685760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.685778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.685803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.685827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.685844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.685857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.686042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.699646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.699861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.699892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.699909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.700108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.700193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.700235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.700249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.700291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.714179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.714415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.714447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.714464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.714490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.714514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.714529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.714542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.714566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.724265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.724482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.724512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.724529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.724555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.724579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.724593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.724606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.724630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.734347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.734546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.734576] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.734595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.734622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.734647] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.734663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.734686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.734711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.746585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.746778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.746809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.746826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.746853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.746881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.746897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.746910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.747388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.758541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.758787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.758820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.758838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.758948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.759059] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.759081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.759095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.762082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.768627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.768807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.768837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.768854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.768880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.768905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.768921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.768934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.768958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.778716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.779000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.779032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.779051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.779109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.779138] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.779154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.779167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.779350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.793152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.794112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.794145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.794163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.794555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.794804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.794831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.794845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.794897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.803548] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.803703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.803733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.803750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.803776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.803800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.803815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.803829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.803853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.813634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.813823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.813851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.813868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.813894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.813918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.813934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.813955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.813981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.826167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.826415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.826457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.826475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.826682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.826755] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.826776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.826790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.826815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.842491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.843096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.843129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.843147] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.843393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.843451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.843487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.843502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.278 [2024-10-07 13:36:24.843528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.278 [2024-10-07 13:36:24.852984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.278 [2024-10-07 13:36:24.854870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.278 [2024-10-07 13:36:24.854903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.278 [2024-10-07 13:36:24.854921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.278 [2024-10-07 13:36:24.857124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.278 [2024-10-07 13:36:24.857851] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.278 [2024-10-07 13:36:24.857876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.278 [2024-10-07 13:36:24.857890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.279 [2024-10-07 13:36:24.858330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.279 [2024-10-07 13:36:24.863233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:24.863446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:24.863479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:24.863498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:24.863524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:24.863547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:24.863563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:24.863577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:24.863601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:24.873333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:24.873467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:24.873495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:24.873513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:24.873537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:24.873577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:24.873593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:24.873606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:24.873801] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:24.887200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:24.887734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:24.887766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:24.887783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:24.888000] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:24.888208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:24.888233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:24.888248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:24.888313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:24.902521] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:24.903054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:24.903086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:24.903104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:24.903491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:24.903608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:24.903631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:24.903646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:24.903840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:24.917917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:24.918064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:24.918093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:24.918111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:24.918137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:24.918160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:24.918176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:24.918190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:24.918214] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:24.929814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:24.930049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:24.930081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:24.930098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:24.930207] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:24.930318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:24.930339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:24.930353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:24.930486] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:24.940438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:24.940833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:24.940866] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:24.940884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:24.940930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:24.940973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:24.940988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:24.941001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:24.941031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:24.950525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:24.950878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:24.950911] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:24.950929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:24.950980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:24.951166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:24.951189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:24.951203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:24.951255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:24.963770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:24.963938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:24.963967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:24.963984] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:24.964010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:24.964034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:24.964051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:24.964064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:24.964088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:24.976776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:24.977013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:24.977044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:24.977062] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:24.977169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:24.977282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:24.977304] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:24.977318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:24.977440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:24.987418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:24.987844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:24.987876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:24.987905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:24.987952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:24.987981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:24.987996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:24.988010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:24.988035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:24.997563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:24.997745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:24.997777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:24.997795] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:24.998107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:24.998186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:24.998206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:24.998219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:24.998262] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:25.012130] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:25.012487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:25.012519] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:25.012537] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:25.012757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:25.012816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:25.012838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:25.012852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:25.012878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:25.026378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:25.026532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:25.026561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:25.026579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:25.026604] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:25.026630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:25.026651] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:25.026673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:25.026713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:25.036465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:25.036591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:25.036620] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:25.036637] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:25.036662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:25.036726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:25.036741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:25.036755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:25.039406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:25.046553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:25.046799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:25.046830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:25.046847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:25.047028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:25.047084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:25.047104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:25.047118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:25.047143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:25.058934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:25.059085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:25.059116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:25.059134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:25.059160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:25.059185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:25.059201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:25.059215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:25.059240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:25.073864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:25.073992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:25.074023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:25.074041] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:25.074067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:25.074091] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:25.074106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:25.074121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:25.074146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:25.087491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:25.089646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:25.089686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:25.089706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:25.090392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:25.090673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:25.090698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:25.090713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:25.090917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:25.097754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:25.097907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:25.097937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:25.097954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:25.098367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.279 [2024-10-07 13:36:25.098401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.279 [2024-10-07 13:36:25.098417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.279 [2024-10-07 13:36:25.098431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.279 [2024-10-07 13:36:25.098456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.279 [2024-10-07 13:36:25.107851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.279 [2024-10-07 13:36:25.108021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.279 [2024-10-07 13:36:25.108051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.279 [2024-10-07 13:36:25.108074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.279 [2024-10-07 13:36:25.108260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.108318] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.108338] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.108352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.108377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.121643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.121986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.122019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.122037] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.122258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.122316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.122336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.122350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.122375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.136552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.136688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.136720] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.136737] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.136762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.136786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.136802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.136815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.136840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.147421] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.147671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.147704] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.147722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.147848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.147965] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.147986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.148006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.148127] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.157510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.157645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.157697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.157721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.157746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.157784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.157803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.157817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.157841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.169229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.169438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.169469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.169487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.169694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.169773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.169795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.169809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.169835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.185129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.185495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.185528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.185545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.185792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.185852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.185874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.185888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.186071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.200241] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.200398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.200429] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.200447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.200473] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.200498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.200513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.200527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.200551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.210851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.213731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.213765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.213782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.215354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.215423] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.215443] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.215457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.215483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.222289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.222442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.222472] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.222490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.222515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.222540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.222555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.222569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.222594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.232493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.232703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.232734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.232752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.232942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.233015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.233037] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.233051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.233075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.245549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.245873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.245906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.245924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.245975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.246004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.246019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.246032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.246057] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.257097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.257396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.257428] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.257446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.257556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.257676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.257708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.257726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.257842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.267185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.267456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.267487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.267505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.267531] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.267556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.267572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.267592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.267618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.277288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.277472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.277502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.277520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.277716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.277805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.277827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.277841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.277867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.289824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.290283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.290315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.290332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.290537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.290608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.290629] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.290642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.290692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.304310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.304771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.304804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.304822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.304885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.304914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.304930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.304958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.304986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.314414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.317080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.317118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.317137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.318411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.318699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.318724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.318738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.318859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.324499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.324682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.324711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.324728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.324754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.324778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.324793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.280 [2024-10-07 13:36:25.324806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.280 [2024-10-07 13:36:25.324831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.280 [2024-10-07 13:36:25.335947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.280 [2024-10-07 13:36:25.336257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.280 [2024-10-07 13:36:25.336290] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.280 [2024-10-07 13:36:25.336308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.280 [2024-10-07 13:36:25.336361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.280 [2024-10-07 13:36:25.336546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.280 [2024-10-07 13:36:25.336569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.281 [2024-10-07 13:36:25.336583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.281 [2024-10-07 13:36:25.336650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.281 [2024-10-07 13:36:25.351717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.281 [2024-10-07 13:36:25.351896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.281 [2024-10-07 13:36:25.351926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.281 [2024-10-07 13:36:25.351943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.281 [2024-10-07 13:36:25.351970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.281 [2024-10-07 13:36:25.352000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.281 [2024-10-07 13:36:25.352016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.281 [2024-10-07 13:36:25.352030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.281 [2024-10-07 13:36:25.352055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.281 [2024-10-07 13:36:25.364748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.281 [2024-10-07 13:36:25.365119] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.281 [2024-10-07 13:36:25.365152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.281 [2024-10-07 13:36:25.365170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.281 [2024-10-07 13:36:25.365376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.281 [2024-10-07 13:36:25.365441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.281 [2024-10-07 13:36:25.365462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.281 [2024-10-07 13:36:25.365491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.281 [2024-10-07 13:36:25.365517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.281 [2024-10-07 13:36:25.375715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.281 [2024-10-07 13:36:25.375973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.281 [2024-10-07 13:36:25.376005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.281 [2024-10-07 13:36:25.376023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.281 [2024-10-07 13:36:25.376131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.281 [2024-10-07 13:36:25.376242] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.281 [2024-10-07 13:36:25.376263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.281 [2024-10-07 13:36:25.376277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.281 [2024-10-07 13:36:25.380475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.281 [2024-10-07 13:36:25.385805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.281 [2024-10-07 13:36:25.385972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.281 [2024-10-07 13:36:25.386003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.281 [2024-10-07 13:36:25.386020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.281 [2024-10-07 13:36:25.386045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.281 [2024-10-07 13:36:25.386070] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.281 [2024-10-07 13:36:25.386085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.281 [2024-10-07 13:36:25.386098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.281 [2024-10-07 13:36:25.386129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.281 [2024-10-07 13:36:25.396152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.396474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.396506] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.396523] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.396575] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.396603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.396619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.396632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.396657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.408695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.409414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.409446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.409464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.409708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.410253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.410278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.410292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.410515] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.419341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.419536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.419567] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.419585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.422140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.423036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.423062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.423092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.423448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.429586] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.429741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.429771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.429794] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.429821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.429845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.429860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.429873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.429897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.439681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.439861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.439891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.439909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.439935] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.439959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.439974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.439988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.440012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.452672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.452898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.452930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.452948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.453132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.453191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.453213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.453243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.453268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.463195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.463349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.463379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.463397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.465968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.466879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.466929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.466944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.467298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.473466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.473730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.473760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.473778] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.473803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.473827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.473843] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.473856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.473881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.483831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.484023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.484054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.484072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.484097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.484122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.484137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.484150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.484174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.494208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.494465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.494497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.494515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.494622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.495992] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.496018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.496032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.497285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.504303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.504501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.504530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.504547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.504572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.504596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.504611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.504625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.504649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.514386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.514542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.514572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.514590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.514615] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.514639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.514655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.514679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.514707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.527402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.529466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.529499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.529517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.529613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.529642] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.529658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.529682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.529709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.537485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.537620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.537649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.537687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.537721] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.537746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.537762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.537775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.537799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.547571] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.547723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.547754] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.547772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.547798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.547822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.547837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.547851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.547875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.560946] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.561298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.281 [2024-10-07 13:36:25.561329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.281 [2024-10-07 13:36:25.561348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.281 [2024-10-07 13:36:25.561552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.281 [2024-10-07 13:36:25.561610] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.281 [2024-10-07 13:36:25.561631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.281 [2024-10-07 13:36:25.561645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.281 [2024-10-07 13:36:25.561680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.281 [2024-10-07 13:36:25.576144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.281 [2024-10-07 13:36:25.576262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.282 [2024-10-07 13:36:25.576293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.282 [2024-10-07 13:36:25.576311] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.282 [2024-10-07 13:36:25.576337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.282 [2024-10-07 13:36:25.576360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.282 [2024-10-07 13:36:25.576376] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.282 [2024-10-07 13:36:25.576399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.282 [2024-10-07 13:36:25.576425] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.282 [2024-10-07 13:36:25.592191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.282 [2024-10-07 13:36:25.593051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.282 [2024-10-07 13:36:25.593083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.282 [2024-10-07 13:36:25.593100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.282 [2024-10-07 13:36:25.593344] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.282 [2024-10-07 13:36:25.593569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.282 [2024-10-07 13:36:25.593593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.282 [2024-10-07 13:36:25.593608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.282 [2024-10-07 13:36:25.593836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.282 [2024-10-07 13:36:25.602277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.282 [2024-10-07 13:36:25.602437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.282 [2024-10-07 13:36:25.602467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.282 [2024-10-07 13:36:25.602484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.282 [2024-10-07 13:36:25.605162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.282 [2024-10-07 13:36:25.607980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.282 [2024-10-07 13:36:25.608008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.282 [2024-10-07 13:36:25.608022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.282 [2024-10-07 13:36:25.611489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.282 8412.75 IOPS, 32.86 MiB/s [2024-10-07T11:36:37.994Z]
00:25:56.282 [2024-10-07 13:36:25.612447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.282 [2024-10-07 13:36:25.612664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.282 [2024-10-07 13:36:25.612698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.282 [2024-10-07 13:36:25.612716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.282 [2024-10-07 13:36:25.612741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.282 [2024-10-07 13:36:25.612765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.282 [2024-10-07 13:36:25.612780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.282 [2024-10-07 13:36:25.612794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.282 [2024-10-07 13:36:25.612818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.282 [2024-10-07 13:36:25.625127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.282 [2024-10-07 13:36:25.625276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.282 [2024-10-07 13:36:25.625307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.282 [2024-10-07 13:36:25.625325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.282 [2024-10-07 13:36:25.625350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.282 [2024-10-07 13:36:25.625374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.282 [2024-10-07 13:36:25.625390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.282 [2024-10-07 13:36:25.625403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.282 [2024-10-07 13:36:25.625427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.282 [2024-10-07 13:36:25.636227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.282 [2024-10-07 13:36:25.636496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.282 [2024-10-07 13:36:25.636527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.282 [2024-10-07 13:36:25.636545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.282 [2024-10-07 13:36:25.636653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.282 [2024-10-07 13:36:25.636790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.282 [2024-10-07 13:36:25.636812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.282 [2024-10-07 13:36:25.636826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.282 [2024-10-07 13:36:25.636930] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.282 [2024-10-07 13:36:25.646330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.282 [2024-10-07 13:36:25.646453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.282 [2024-10-07 13:36:25.646482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.282 [2024-10-07 13:36:25.646499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.282 [2024-10-07 13:36:25.646524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.282 [2024-10-07 13:36:25.646547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.282 [2024-10-07 13:36:25.646562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.282 [2024-10-07 13:36:25.646574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.282 [2024-10-07 13:36:25.646598] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.282 [2024-10-07 13:36:25.656414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.282 [2024-10-07 13:36:25.656534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.282 [2024-10-07 13:36:25.656565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.282 [2024-10-07 13:36:25.656583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.282 [2024-10-07 13:36:25.656798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.282 [2024-10-07 13:36:25.656872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.282 [2024-10-07 13:36:25.656894] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.282 [2024-10-07 13:36:25.656908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.282 [2024-10-07 13:36:25.656934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.282 [2024-10-07 13:36:25.670263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.282 [2024-10-07 13:36:25.670411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.282 [2024-10-07 13:36:25.670441] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.282 [2024-10-07 13:36:25.670459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.282 [2024-10-07 13:36:25.670485] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.282 [2024-10-07 13:36:25.670509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.282 [2024-10-07 13:36:25.670525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.282 [2024-10-07 13:36:25.670538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.282 [2024-10-07 13:36:25.670563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.282 [2024-10-07 13:36:25.686426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.282 [2024-10-07 13:36:25.686701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.282 [2024-10-07 13:36:25.686733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.282 [2024-10-07 13:36:25.686751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.282 [2024-10-07 13:36:25.686777] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.282 [2024-10-07 13:36:25.686802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.282 [2024-10-07 13:36:25.686817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.282 [2024-10-07 13:36:25.686830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.282 [2024-10-07 13:36:25.687329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.282 [2024-10-07 13:36:25.702019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.282 [2024-10-07 13:36:25.702613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.282 [2024-10-07 13:36:25.702645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.282 [2024-10-07 13:36:25.702662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.282 [2024-10-07 13:36:25.702889] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.282 [2024-10-07 13:36:25.702964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.282 [2024-10-07 13:36:25.702985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.282 [2024-10-07 13:36:25.703027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.282 [2024-10-07 13:36:25.703055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.282 [2024-10-07 13:36:25.712380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.282 [2024-10-07 13:36:25.714259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.282 [2024-10-07 13:36:25.714291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.282 [2024-10-07 13:36:25.714309] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.282 [2024-10-07 13:36:25.716446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.282 [2024-10-07 13:36:25.717197] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.282 [2024-10-07 13:36:25.717221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.282 [2024-10-07 13:36:25.717234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.282 [2024-10-07 13:36:25.717642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.282 [2024-10-07 13:36:25.722572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.282 [2024-10-07 13:36:25.722765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.282 [2024-10-07 13:36:25.722796] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.282 [2024-10-07 13:36:25.722813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.282 [2024-10-07 13:36:25.722839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.282 [2024-10-07 13:36:25.722864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.282 [2024-10-07 13:36:25.722879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.282 [2024-10-07 13:36:25.722892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.282 [2024-10-07 13:36:25.722917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.282 [2024-10-07 13:36:25.732657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.282 [2024-10-07 13:36:25.732843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.282 [2024-10-07 13:36:25.732875] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.282 [2024-10-07 13:36:25.732893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.282 [2024-10-07 13:36:25.733130] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.282 [2024-10-07 13:36:25.733205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.282 [2024-10-07 13:36:25.733240] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.282 [2024-10-07 13:36:25.733256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.282 [2024-10-07 13:36:25.733441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.282 [2024-10-07 13:36:25.748244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.282 [2024-10-07 13:36:25.748610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.282 [2024-10-07 13:36:25.748646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.282 [2024-10-07 13:36:25.748673] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.282 [2024-10-07 13:36:25.748885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.282 [2024-10-07 13:36:25.748943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.282 [2024-10-07 13:36:25.748965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.282 [2024-10-07 13:36:25.748979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.282 [2024-10-07 13:36:25.749005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.282 [2024-10-07 13:36:25.762573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.282 [2024-10-07 13:36:25.762983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.282 [2024-10-07 13:36:25.763016] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.282 [2024-10-07 13:36:25.763034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.282 [2024-10-07 13:36:25.763088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.282 [2024-10-07 13:36:25.763274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.282 [2024-10-07 13:36:25.763298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.282 [2024-10-07 13:36:25.763315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.282 [2024-10-07 13:36:25.763367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.282 [2024-10-07 13:36:25.778087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.282 [2024-10-07 13:36:25.778608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.282 [2024-10-07 13:36:25.778640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.282 [2024-10-07 13:36:25.778657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.282 [2024-10-07 13:36:25.778884] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.282 [2024-10-07 13:36:25.778943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.282 [2024-10-07 13:36:25.778964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.282 [2024-10-07 13:36:25.778978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.282 [2024-10-07 13:36:25.779160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.282 [2024-10-07 13:36:25.793203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.282 [2024-10-07 13:36:25.793325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.282 [2024-10-07 13:36:25.793355] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.282 [2024-10-07 13:36:25.793372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.282 [2024-10-07 13:36:25.793397] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.282 [2024-10-07 13:36:25.793428] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.282 [2024-10-07 13:36:25.793445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.282 [2024-10-07 13:36:25.793458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.282 [2024-10-07 13:36:25.793483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.282 [2024-10-07 13:36:25.803856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.282 [2024-10-07 13:36:25.806731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.282 [2024-10-07 13:36:25.806763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.282 [2024-10-07 13:36:25.806782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.282 [2024-10-07 13:36:25.808363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.282 [2024-10-07 13:36:25.808432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.282 [2024-10-07 13:36:25.808451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.282 [2024-10-07 13:36:25.808465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.282 [2024-10-07 13:36:25.808490] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.282 [2024-10-07 13:36:25.813944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.282 [2024-10-07 13:36:25.814114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.282 [2024-10-07 13:36:25.814145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.282 [2024-10-07 13:36:25.814162] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.283 [2024-10-07 13:36:25.814187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.283 [2024-10-07 13:36:25.814211] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.283 [2024-10-07 13:36:25.814227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.283 [2024-10-07 13:36:25.814240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.283 [2024-10-07 13:36:25.814264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.283 [2024-10-07 13:36:25.824296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.283 [2024-10-07 13:36:25.824519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.283 [2024-10-07 13:36:25.824549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.283 [2024-10-07 13:36:25.824566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.283 [2024-10-07 13:36:25.824762] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.283 [2024-10-07 13:36:25.824820] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.283 [2024-10-07 13:36:25.824842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.283 [2024-10-07 13:36:25.824856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.283 [2024-10-07 13:36:25.824887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.283 [2024-10-07 13:36:25.839726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.283 [2024-10-07 13:36:25.840108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.283 [2024-10-07 13:36:25.840151] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.283 [2024-10-07 13:36:25.840168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.283 [2024-10-07 13:36:25.840400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.283 [2024-10-07 13:36:25.840471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.283 [2024-10-07 13:36:25.840493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.283 [2024-10-07 13:36:25.840507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.283 [2024-10-07 13:36:25.840533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.283 [2024-10-07 13:36:25.850075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.283 [2024-10-07 13:36:25.850365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.283 [2024-10-07 13:36:25.850396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.283 [2024-10-07 13:36:25.850415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.283 [2024-10-07 13:36:25.853781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.283 [2024-10-07 13:36:25.854639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.283 [2024-10-07 13:36:25.854671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.283 [2024-10-07 13:36:25.854688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.283 [2024-10-07 13:36:25.855069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.283 [2024-10-07 13:36:25.860158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.283 [2024-10-07 13:36:25.860288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.283 [2024-10-07 13:36:25.860316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.283 [2024-10-07 13:36:25.860333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.283 [2024-10-07 13:36:25.860357] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.283 [2024-10-07 13:36:25.860381] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.283 [2024-10-07 13:36:25.860396] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.283 [2024-10-07 13:36:25.860409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.283 [2024-10-07 13:36:25.860433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.283 [2024-10-07 13:36:25.871920] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.283 [2024-10-07 13:36:25.872153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.283 [2024-10-07 13:36:25.872184] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.283 [2024-10-07 13:36:25.872207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.283 [2024-10-07 13:36:25.872410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.283 [2024-10-07 13:36:25.872482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.283 [2024-10-07 13:36:25.872517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.283 [2024-10-07 13:36:25.872532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.283 [2024-10-07 13:36:25.872575] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.283 [2024-10-07 13:36:25.883951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.283 [2024-10-07 13:36:25.886068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.283 [2024-10-07 13:36:25.886101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.283 [2024-10-07 13:36:25.886119] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.283 [2024-10-07 13:36:25.886817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.283 [2024-10-07 13:36:25.887079] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.283 [2024-10-07 13:36:25.887103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.283 [2024-10-07 13:36:25.887118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.283 [2024-10-07 13:36:25.887336] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.283 [2024-10-07 13:36:25.894040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.283 [2024-10-07 13:36:25.894185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.283 [2024-10-07 13:36:25.894215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.283 [2024-10-07 13:36:25.894233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.283 [2024-10-07 13:36:25.894674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.283 [2024-10-07 13:36:25.894722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.283 [2024-10-07 13:36:25.894737] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.283 [2024-10-07 13:36:25.894751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.283 [2024-10-07 13:36:25.894791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.283 [2024-10-07 13:36:25.904426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.283 [2024-10-07 13:36:25.904621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.283 [2024-10-07 13:36:25.904652] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.283 [2024-10-07 13:36:25.904677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.283 [2024-10-07 13:36:25.904706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.283 [2024-10-07 13:36:25.904731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.283 [2024-10-07 13:36:25.904753] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.283 [2024-10-07 13:36:25.904768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.283 [2024-10-07 13:36:25.904793] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.283 [2024-10-07 13:36:25.917301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.283 [2024-10-07 13:36:25.917459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.283 [2024-10-07 13:36:25.917490] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.283 [2024-10-07 13:36:25.917508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.283 [2024-10-07 13:36:25.917533] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.283 [2024-10-07 13:36:25.917557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.283 [2024-10-07 13:36:25.917573] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.283 [2024-10-07 13:36:25.917587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.283 [2024-10-07 13:36:25.917611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.283 [2024-10-07 13:36:25.932561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.283 [2024-10-07 13:36:25.933681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.283 [2024-10-07 13:36:25.933713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.283 [2024-10-07 13:36:25.933731] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.283 [2024-10-07 13:36:25.934177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.283 [2024-10-07 13:36:25.934490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.283 [2024-10-07 13:36:25.934516] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.283 [2024-10-07 13:36:25.934531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.283 [2024-10-07 13:36:25.934602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.283 [2024-10-07 13:36:25.947195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.283 [2024-10-07 13:36:25.947325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.283 [2024-10-07 13:36:25.947356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.283 [2024-10-07 13:36:25.947374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.283 [2024-10-07 13:36:25.947400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.283 [2024-10-07 13:36:25.947424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.283 [2024-10-07 13:36:25.947439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.283 [2024-10-07 13:36:25.947453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.283 [2024-10-07 13:36:25.947478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.283 [2024-10-07 13:36:25.962657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.283 [2024-10-07 13:36:25.962866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.283 [2024-10-07 13:36:25.962898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.283 [2024-10-07 13:36:25.962917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.283 [2024-10-07 13:36:25.962943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.283 [2024-10-07 13:36:25.962968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.283 [2024-10-07 13:36:25.962984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.283 [2024-10-07 13:36:25.962998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.283 [2024-10-07 13:36:25.963023] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.283 [2024-10-07 13:36:25.976208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.283 [2024-10-07 13:36:25.978349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.283 [2024-10-07 13:36:25.978381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.283 [2024-10-07 13:36:25.978406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.283 [2024-10-07 13:36:25.979070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.283 [2024-10-07 13:36:25.979357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.283 [2024-10-07 13:36:25.979383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.283 [2024-10-07 13:36:25.979397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.283 [2024-10-07 13:36:25.979613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.283 [2024-10-07 13:36:25.986297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.283 [2024-10-07 13:36:25.986847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.283 [2024-10-07 13:36:25.986879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.283 [2024-10-07 13:36:25.986898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.283 [2024-10-07 13:36:25.986925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.283 [2024-10-07 13:36:25.986949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.283 [2024-10-07 13:36:25.986965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.283 [2024-10-07 13:36:25.986979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.283 [2024-10-07 13:36:25.987003] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.283 [2024-10-07 13:36:25.996385] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.283 [2024-10-07 13:36:25.996533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.283 [2024-10-07 13:36:25.996563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.283 [2024-10-07 13:36:25.996581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.283 [2024-10-07 13:36:25.996782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.283 [2024-10-07 13:36:25.996841] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.283 [2024-10-07 13:36:25.996863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.283 [2024-10-07 13:36:25.996878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.283 [2024-10-07 13:36:25.996903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.283 [2024-10-07 13:36:26.009966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.283 [2024-10-07 13:36:26.010114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.283 [2024-10-07 13:36:26.010146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.283 [2024-10-07 13:36:26.010164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.283 [2024-10-07 13:36:26.010190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.283 [2024-10-07 13:36:26.010215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.283 [2024-10-07 13:36:26.010230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.283 [2024-10-07 13:36:26.010243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.283 [2024-10-07 13:36:26.010269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.283 [2024-10-07 13:36:26.024809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.283 [2024-10-07 13:36:26.025009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.283 [2024-10-07 13:36:26.025041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.283 [2024-10-07 13:36:26.025059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.283 [2024-10-07 13:36:26.025085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.283 [2024-10-07 13:36:26.025110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.283 [2024-10-07 13:36:26.025126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.283 [2024-10-07 13:36:26.025139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.283 [2024-10-07 13:36:26.025164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.283 [2024-10-07 13:36:26.038583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.283 [2024-10-07 13:36:26.040073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.283 [2024-10-07 13:36:26.040106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.283 [2024-10-07 13:36:26.040123] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.283 [2024-10-07 13:36:26.040610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.283 [2024-10-07 13:36:26.040739] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.283 [2024-10-07 13:36:26.040763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.283 [2024-10-07 13:36:26.040783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.283 [2024-10-07 13:36:26.040809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.283 [2024-10-07 13:36:26.048848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.283 [2024-10-07 13:36:26.049000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.283 [2024-10-07 13:36:26.049031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.283 [2024-10-07 13:36:26.049049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.283 [2024-10-07 13:36:26.049074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.283 [2024-10-07 13:36:26.049098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.283 [2024-10-07 13:36:26.049114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.283 [2024-10-07 13:36:26.049128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.283 [2024-10-07 13:36:26.049152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.283 [2024-10-07 13:36:26.058935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.283 [2024-10-07 13:36:26.059221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.283 [2024-10-07 13:36:26.059253] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.283 [2024-10-07 13:36:26.059270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.283 [2024-10-07 13:36:26.059322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.283 [2024-10-07 13:36:26.059350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.283 [2024-10-07 13:36:26.059365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.283 [2024-10-07 13:36:26.059379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.059405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.072796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.072939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.072970] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.072988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.073013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.073050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.073069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.073083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.073108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.087728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.087879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.087907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.087924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.087953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.087977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.087992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.088006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.088031] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.103863] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.104076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.104107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.104125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.104151] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.104201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.104224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.104238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.104263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.113968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.114103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.114149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.114166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.114192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.114224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.114241] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.114255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.114280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.124684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.124820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.124851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.124868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.124899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.124925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.124940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.124954] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.124979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.136077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.136408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.136442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.136461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.136512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.136541] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.136556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.136569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.136594] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.148974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.149212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.149244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.149262] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.149371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.151532] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.151560] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.151575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.152394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.159064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.159219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.159248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.159265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.159290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.159314] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.159330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.159350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.159376] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.169157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.169360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.169391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.169408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.169434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.169617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.169642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.169662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.169726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.184228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.184659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.184699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.184720] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.184925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.184998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.185019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.185033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.185058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.199552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.199673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.199703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.199721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.199747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.199771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.199786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.199800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.199825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.209638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.209856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.209891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.209909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.209934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.209967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.209982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.209996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.210021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.219797] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.220007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.220050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.220068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.220094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.220118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.220134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.220147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.220172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.232597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.232787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.232819] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.232836] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.232862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.232887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.232902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.232916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.232940] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.248176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.248299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.248330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.248348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.248374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.248405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.248421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.248434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.248459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.262659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.263411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.263443] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.263461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.263709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.263920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.263944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.263969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.264036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.278177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.278871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.278904] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.278931] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.279306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.279377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.279414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.279428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.279492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.288266] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.288428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.288467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.288484] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.288510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.288533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.288549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.288562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.288593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.299803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.299988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.300020] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.300038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.300064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.300088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.300104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.284 [2024-10-07 13:36:26.300117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.284 [2024-10-07 13:36:26.300143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.284 [2024-10-07 13:36:26.311676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.284 [2024-10-07 13:36:26.311998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.284 [2024-10-07 13:36:26.312031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.284 [2024-10-07 13:36:26.312048] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.284 [2024-10-07 13:36:26.312535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.284 [2024-10-07 13:36:26.312775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.284 [2024-10-07 13:36:26.312800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.285 [2024-10-07 13:36:26.312815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.285 [2024-10-07 13:36:26.312867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.285 [2024-10-07 13:36:26.322112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.285 [2024-10-07 13:36:26.322325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.285 [2024-10-07 13:36:26.322357] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.285 [2024-10-07 13:36:26.322375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.285 [2024-10-07 13:36:26.326697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.285 [2024-10-07 13:36:26.326778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.285 [2024-10-07 13:36:26.326800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.285 [2024-10-07 13:36:26.326815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.285 [2024-10-07 13:36:26.326841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.285 [2024-10-07 13:36:26.332393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.285 [2024-10-07 13:36:26.334620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.285 [2024-10-07 13:36:26.334653] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.285 [2024-10-07 13:36:26.334692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.285 [2024-10-07 13:36:26.335385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.285 [2024-10-07 13:36:26.335416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.285 [2024-10-07 13:36:26.335432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.285 [2024-10-07 13:36:26.335444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.285 [2024-10-07 13:36:26.335470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.285 [2024-10-07 13:36:26.342496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.285 [2024-10-07 13:36:26.342826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.285 [2024-10-07 13:36:26.342858] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.285 [2024-10-07 13:36:26.342876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.285 [2024-10-07 13:36:26.342939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.285 [2024-10-07 13:36:26.342968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.285 [2024-10-07 13:36:26.342994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.285 [2024-10-07 13:36:26.343007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.285 [2024-10-07 13:36:26.343190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.285 [2024-10-07 13:36:26.357708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.285 [2024-10-07 13:36:26.358009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.285 [2024-10-07 13:36:26.358040] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.285 [2024-10-07 13:36:26.358059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.285 [2024-10-07 13:36:26.358109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.285 [2024-10-07 13:36:26.358148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.285 [2024-10-07 13:36:26.358163] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.285 [2024-10-07 13:36:26.358176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.285 [2024-10-07 13:36:26.358201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.285 [2024-10-07 13:36:26.372696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.285 [2024-10-07 13:36:26.372822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.285 [2024-10-07 13:36:26.372855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.285 [2024-10-07 13:36:26.372872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.285 [2024-10-07 13:36:26.372898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.285 [2024-10-07 13:36:26.372923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.285 [2024-10-07 13:36:26.372943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.285 [2024-10-07 13:36:26.372958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.285 [2024-10-07 13:36:26.372994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.285 [2024-10-07 13:36:26.383348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.285 [2024-10-07 13:36:26.386247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.285 [2024-10-07 13:36:26.386279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.285 [2024-10-07 13:36:26.386300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.285 [2024-10-07 13:36:26.387880] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.285 [2024-10-07 13:36:26.387961] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.285 [2024-10-07 13:36:26.387982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.285 [2024-10-07 13:36:26.387996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.285 [2024-10-07 13:36:26.388022] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.285 [2024-10-07 13:36:26.393549] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.285 [2024-10-07 13:36:26.393711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.285 [2024-10-07 13:36:26.393742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.285 [2024-10-07 13:36:26.393760] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.285 [2024-10-07 13:36:26.393785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.285 [2024-10-07 13:36:26.393809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.285 [2024-10-07 13:36:26.393825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.285 [2024-10-07 13:36:26.393838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.285 [2024-10-07 13:36:26.393863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.285 [2024-10-07 13:36:26.403637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.285 [2024-10-07 13:36:26.403860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.285 [2024-10-07 13:36:26.403890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.285 [2024-10-07 13:36:26.403908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.285 [2024-10-07 13:36:26.404092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.285 [2024-10-07 13:36:26.404164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.285 [2024-10-07 13:36:26.404186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.285 [2024-10-07 13:36:26.404200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.285 [2024-10-07 13:36:26.404224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.285 [2024-10-07 13:36:26.412178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:59952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:59968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:60008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:60016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:60024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:60032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:60040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:60048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:56.285 [2024-10-07 13:36:26.412643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:60072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:60104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:60112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:60128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:60136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.412960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.412975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:60144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:60160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:60168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:60176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:60192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:60200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:60208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:60216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:60224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:60232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:60240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:60248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:60256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:60272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:60288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:60296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.285 [2024-10-07 13:36:26.413572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.285 [2024-10-07 13:36:26.413587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:60304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.413600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.413616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:60312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.413631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.413646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.413659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.413698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:60328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.413724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.413740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.413754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.413769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:60344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.413787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.413803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:60352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.413818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.413833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.413847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.413862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.413876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.413892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.413906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.413921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.413935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.413950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.413964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.413994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.414018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.414033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.414047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.414062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.414075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.414090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.414103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.414118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.414132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.414147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.414161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.414179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.414194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.414209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.414230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.414246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.414260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.414275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.414288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:56.286 [2024-10-07 13:36:26.414303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:56.286 [2024-10-07 13:36:26.414316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-10-07 13:36:26.414331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.414344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:60496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.414372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:60504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.414401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:60512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.414430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:60520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.414458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.414487] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:60536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.414515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:60544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.414544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.414577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.414605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:60568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.414635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.414663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.414728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.414759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.414788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.414819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.414848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 
13:36:26.414864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.414878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:60576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.414908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.414937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.414971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.414987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.415012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.415041] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.415071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.415101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.286 [2024-10-07 13:36:26.415130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:60720 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 
[2024-10-07 13:36:26.415775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415940] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.415969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.415985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.416000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.416014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.416045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.416059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.416074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.416087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.416102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.416116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.416131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:56.286 [2024-10-07 13:36:26.416144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.286 [2024-10-07 13:36:26.416180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.286 [2024-10-07 13:36:26.416201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60960 len:8 PRP1 0x0 PRP2 0x0 00:25:56.287 [2024-10-07 13:36:26.416216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.287 [2024-10-07 13:36:26.416232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:56.287 [2024-10-07 13:36:26.416244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:56.287 [2024-10-07 13:36:26.416255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60968 len:8 PRP1 0x0 PRP2 0x0 00:25:56.287 [2024-10-07 13:36:26.416267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.287 [2024-10-07 13:36:26.416326] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d125a0 was disconnected and freed. reset controller. 
00:25:56.287 [2024-10-07 13:36:26.416394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.287 [2024-10-07 13:36:26.416431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.287 [2024-10-07 13:36:26.416447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.287 [2024-10-07 13:36:26.416460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.287 [2024-10-07 13:36:26.416474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.287 [2024-10-07 13:36:26.416487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.287 [2024-10-07 13:36:26.416501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:56.287 [2024-10-07 13:36:26.416514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:56.287 [2024-10-07 13:36:26.416527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.417765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.417796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.417826] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting 
controller 00:25:56.287 [2024-10-07 13:36:26.417942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.417980] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.287 [2024-10-07 13:36:26.417996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.418107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.418132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.287 [2024-10-07 13:36:26.418148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.418173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.418193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.418214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.418233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.418247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.287 [2024-10-07 13:36:26.418264] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.418283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.418296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.287 [2024-10-07 13:36:26.418322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.287 [2024-10-07 13:36:26.418339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.287 [2024-10-07 13:36:26.427914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.428132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.428262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.428291] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.287 [2024-10-07 13:36:26.428308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.428463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.428489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.287 [2024-10-07 13:36:26.428505] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.428524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.428550] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.428569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.428582] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.428595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.287 [2024-10-07 13:36:26.428620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.287 [2024-10-07 13:36:26.428638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.428650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.428664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.287 [2024-10-07 13:36:26.428714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.287 [2024-10-07 13:36:26.439481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.439531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.441627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.441661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.287 [2024-10-07 13:36:26.441692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.441815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.441840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.287 [2024-10-07 13:36:26.441856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.442931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.442979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.443384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.443410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.443441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.287 [2024-10-07 13:36:26.443459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.443474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.443503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.287 [2024-10-07 13:36:26.443742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.287 [2024-10-07 13:36:26.443766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.287 [2024-10-07 13:36:26.450818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.450852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.451076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.451108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.287 [2024-10-07 13:36:26.451125] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.451239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.451265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.287 [2024-10-07 13:36:26.451281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.451390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.451417] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.451533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.451569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.451583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.287 [2024-10-07 13:36:26.451601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.451615] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.451645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.287 [2024-10-07 13:36:26.451763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.287 [2024-10-07 13:36:26.451794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.287 [2024-10-07 13:36:26.461239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.461287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.461455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.461484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.287 [2024-10-07 13:36:26.461502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.461651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.461685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.287 [2024-10-07 13:36:26.461704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.461731] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.461753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.461774] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.461789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.461802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.287 [2024-10-07 13:36:26.461819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.461833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.461846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.287 [2024-10-07 13:36:26.461870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.287 [2024-10-07 13:36:26.461886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.287 [2024-10-07 13:36:26.472379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.472413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.472757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.472791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.287 [2024-10-07 13:36:26.472809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.472924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.472950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.287 [2024-10-07 13:36:26.472966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.473171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.473199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.473248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.473268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.473288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.287 [2024-10-07 13:36:26.473306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.473320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.473333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.287 [2024-10-07 13:36:26.473530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.287 [2024-10-07 13:36:26.473552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.287 [2024-10-07 13:36:26.488034] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.488067] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.488439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.488471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.287 [2024-10-07 13:36:26.488489] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.488596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.488622] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.287 [2024-10-07 13:36:26.488639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.489033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.489079] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.489153] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.489174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.489189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.287 [2024-10-07 13:36:26.489207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.489222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.489234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.287 [2024-10-07 13:36:26.489259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.287 [2024-10-07 13:36:26.489276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.287 [2024-10-07 13:36:26.503403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.503436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.503546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.503575] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.287 [2024-10-07 13:36:26.503592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.503736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.503768] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.287 [2024-10-07 13:36:26.503785] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.503812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.503834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.503855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.503870] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.503884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.287 [2024-10-07 13:36:26.503901] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.287 [2024-10-07 13:36:26.503915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.287 [2024-10-07 13:36:26.503928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.287 [2024-10-07 13:36:26.503953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.287 [2024-10-07 13:36:26.503970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.287 [2024-10-07 13:36:26.519331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.519365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.287 [2024-10-07 13:36:26.519574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.519604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.287 [2024-10-07 13:36:26.519621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.519736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.287 [2024-10-07 13:36:26.519764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.287 [2024-10-07 13:36:26.519780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.287 [2024-10-07 13:36:26.519806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.287 [2024-10-07 13:36:26.519827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.288 [2024-10-07 13:36:26.519849] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.288 [2024-10-07 13:36:26.519863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.288 [2024-10-07 13:36:26.519877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.288 [2024-10-07 13:36:26.519894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.288 [2024-10-07 13:36:26.519909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.288 [2024-10-07 13:36:26.519923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.288 [2024-10-07 13:36:26.519947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.288 [2024-10-07 13:36:26.519964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.288 [2024-10-07 13:36:26.532525] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.288 [2024-10-07 13:36:26.532559] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.288 [2024-10-07 13:36:26.532804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.288 [2024-10-07 13:36:26.532834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.288 [2024-10-07 13:36:26.532851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.288 [2024-10-07 13:36:26.532937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.288 [2024-10-07 13:36:26.532963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.288 [2024-10-07 13:36:26.532979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.288 [2024-10-07 13:36:26.533086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.288 [2024-10-07 13:36:26.533114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.288 [2024-10-07 13:36:26.535295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.288 [2024-10-07 13:36:26.535321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.288 [2024-10-07 13:36:26.535336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.288 [2024-10-07 13:36:26.535354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.288 [2024-10-07 13:36:26.535369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.288 [2024-10-07 13:36:26.535382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.288 [2024-10-07 13:36:26.536228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.288 [2024-10-07 13:36:26.536253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.288 [2024-10-07 13:36:26.542828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.288 [2024-10-07 13:36:26.542861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.288 [2024-10-07 13:36:26.543008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.288 [2024-10-07 13:36:26.543036] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.288 [2024-10-07 13:36:26.543053] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.288 [2024-10-07 13:36:26.543183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.288 [2024-10-07 13:36:26.543209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.288 [2024-10-07 13:36:26.543224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.288 [2024-10-07 13:36:26.543609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.288 [2024-10-07 13:36:26.543640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.288 [2024-10-07 13:36:26.543700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.288 [2024-10-07 13:36:26.543723] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.288 [2024-10-07 13:36:26.543741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.288 [2024-10-07 13:36:26.543760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.288 [2024-10-07 13:36:26.543776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.288 [2024-10-07 13:36:26.543788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.288 [2024-10-07 13:36:26.543824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.288 [2024-10-07 13:36:26.543842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.288 [2024-10-07 13:36:26.552943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.288 [2024-10-07 13:36:26.553175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.288 [2024-10-07 13:36:26.553366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.288 [2024-10-07 13:36:26.553397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.288 [2024-10-07 13:36:26.553415] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.288 [2024-10-07 13:36:26.553535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.288 [2024-10-07 13:36:26.553562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.288 [2024-10-07 13:36:26.553579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.288 [2024-10-07 13:36:26.553598] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.288 [2024-10-07 13:36:26.553795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.288 [2024-10-07 13:36:26.553836] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.288 [2024-10-07 13:36:26.553851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.288 [2024-10-07 13:36:26.553864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.288 [2024-10-07 13:36:26.553928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.288 [2024-10-07 13:36:26.553950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.288 [2024-10-07 13:36:26.553963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.288 [2024-10-07 13:36:26.553978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.288 [2024-10-07 13:36:26.554002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.288 [2024-10-07 13:36:26.563454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.288 [2024-10-07 13:36:26.563587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.288 [2024-10-07 13:36:26.563795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.288 [2024-10-07 13:36:26.563827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.288 [2024-10-07 13:36:26.563844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.288 [2024-10-07 13:36:26.564018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.288 [2024-10-07 13:36:26.564046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.288 [2024-10-07 13:36:26.564068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.288 [2024-10-07 13:36:26.564087] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.288 [2024-10-07 13:36:26.564198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.288 [2024-10-07 13:36:26.564221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.288 [2024-10-07 13:36:26.564234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.288 [2024-10-07 13:36:26.564248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.288 [2024-10-07 13:36:26.566952] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.288 [2024-10-07 13:36:26.566981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.288 [2024-10-07 13:36:26.566995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.288 [2024-10-07 13:36:26.567009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.288 [2024-10-07 13:36:26.568032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.288 [2024-10-07 13:36:26.573540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.288 [2024-10-07 13:36:26.573736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.288 [2024-10-07 13:36:26.573765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.288 [2024-10-07 13:36:26.573782] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.288 [2024-10-07 13:36:26.573807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.288 [2024-10-07 13:36:26.573845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.288 [2024-10-07 13:36:26.573864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.288 [2024-10-07 13:36:26.573879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.288 [2024-10-07 13:36:26.573905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.288 [2024-10-07 13:36:26.573925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.288 [2024-10-07 13:36:26.574101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.288 [2024-10-07 13:36:26.574128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.288 [2024-10-07 13:36:26.574144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.288 [2024-10-07 13:36:26.574169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.288 [2024-10-07 13:36:26.574194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.288 [2024-10-07 13:36:26.574209] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.288 [2024-10-07 13:36:26.574223] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.288 [2024-10-07 13:36:26.574248] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.288 [2024-10-07 13:36:26.583622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.288 [2024-10-07 13:36:26.583827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.288 [2024-10-07 13:36:26.583857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.288 [2024-10-07 13:36:26.583874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.288 [2024-10-07 13:36:26.583900] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.288 [2024-10-07 13:36:26.584025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.288 [2024-10-07 13:36:26.584049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.288 [2024-10-07 13:36:26.584063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.288 [2024-10-07 13:36:26.584416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.288 [2024-10-07 13:36:26.584484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.288 [2024-10-07 13:36:26.584595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.288 [2024-10-07 13:36:26.584623] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.288 [2024-10-07 13:36:26.584640] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.288 [2024-10-07 13:36:26.584676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.288 [2024-10-07 13:36:26.584704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.288 [2024-10-07 13:36:26.584720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.288 [2024-10-07 13:36:26.584733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.288 [2024-10-07 13:36:26.584757] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.288 [2024-10-07 13:36:26.597451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.288 [2024-10-07 13:36:26.597486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.288 [2024-10-07 13:36:26.597784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.288 [2024-10-07 13:36:26.597815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.288 [2024-10-07 13:36:26.597833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.288 [2024-10-07 13:36:26.597955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.288 [2024-10-07 13:36:26.597982] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.288 [2024-10-07 13:36:26.597998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.288 [2024-10-07 13:36:26.598202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.288 [2024-10-07 13:36:26.598231] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.288 [2024-10-07 13:36:26.598279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.288 [2024-10-07 13:36:26.598298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.288 [2024-10-07 13:36:26.598312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.288 [2024-10-07 13:36:26.598335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.288 [2024-10-07 13:36:26.598351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.288 [2024-10-07 13:36:26.598364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.288 [2024-10-07 13:36:26.598546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.288 [2024-10-07 13:36:26.598570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.288 8379.60 IOPS, 32.73 MiB/s [2024-10-07T11:36:38.000Z] [2024-10-07 13:36:26.612630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.288 [2024-10-07 13:36:26.612660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.288 [2024-10-07 13:36:26.612787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.288 [2024-10-07 13:36:26.612815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.288 [2024-10-07 13:36:26.612831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.288 [2024-10-07 13:36:26.612938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.288 [2024-10-07 13:36:26.612963] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.288 [2024-10-07 13:36:26.612979] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.288 [2024-10-07 13:36:26.613006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.288 [2024-10-07 13:36:26.613027] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.288 [2024-10-07 13:36:26.613062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.288 [2024-10-07 13:36:26.613082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.288 [2024-10-07 13:36:26.613095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.288 [2024-10-07 13:36:26.613112] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.288 [2024-10-07 13:36:26.613127] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.288 [2024-10-07 13:36:26.613139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.288 [2024-10-07 13:36:26.613164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.288 [2024-10-07 13:36:26.613180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.288 [2024-10-07 13:36:26.625024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.288 [2024-10-07 13:36:26.625058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.288 [2024-10-07 13:36:26.625198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.288 [2024-10-07 13:36:26.625227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.288 [2024-10-07 13:36:26.625243] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.288 [2024-10-07 13:36:26.625351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.288 [2024-10-07 13:36:26.625377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.288 [2024-10-07 13:36:26.625393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.288 [2024-10-07 13:36:26.625425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.288 [2024-10-07 13:36:26.625447] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.288 [2024-10-07 13:36:26.625469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.288 [2024-10-07 13:36:26.625484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.288 [2024-10-07 13:36:26.625497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.288 [2024-10-07 13:36:26.625514] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.288 [2024-10-07 13:36:26.625528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.288 [2024-10-07 13:36:26.625541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.288 [2024-10-07 13:36:26.625566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.288 [2024-10-07 13:36:26.625583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.288 [2024-10-07 13:36:26.638449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.288 [2024-10-07 13:36:26.638482] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.288 [2024-10-07 13:36:26.638707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.288 [2024-10-07 13:36:26.638737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.288 [2024-10-07 13:36:26.638754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.288 [2024-10-07 13:36:26.638872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.288 [2024-10-07 13:36:26.638898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.288 [2024-10-07 13:36:26.638914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.288 [2024-10-07 13:36:26.639023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.288 [2024-10-07 13:36:26.639050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.288 [2024-10-07 13:36:26.639179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.288 [2024-10-07 13:36:26.639199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.288 [2024-10-07 13:36:26.639213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.288 [2024-10-07 13:36:26.639230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.639244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.639256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.642480] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.642509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.648563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.648608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.648779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.648808] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.289 [2024-10-07 13:36:26.648825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.648948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.648974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.289 [2024-10-07 13:36:26.648991] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.649010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.649036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.649054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.649067] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.649081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.649105] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.649122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.649135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.649149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.649171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.658649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.658807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.658837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.289 [2024-10-07 13:36:26.658854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.659052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.659145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.659180] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.659196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.659210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.659234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.659354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.659381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.289 [2024-10-07 13:36:26.659397] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.659582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.659658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.659702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.659717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.659744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.673954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.674002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.674572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.674604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.289 [2024-10-07 13:36:26.674621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.674711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.674738] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.289 [2024-10-07 13:36:26.674755] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.674972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.675001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.675048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.675069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.675082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.675100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.675114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.675127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.675152] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.675168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.689348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.689380] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.689738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.689771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.289 [2024-10-07 13:36:26.689789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.689876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.689902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.289 [2024-10-07 13:36:26.689918] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.690129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.690158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.690360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.690386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.690400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.690418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.690432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.690446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.690687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.690710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.704469] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.704517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.704658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.704697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.289 [2024-10-07 13:36:26.704714] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.704805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.704831] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.289 [2024-10-07 13:36:26.704848] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.704873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.704894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.704915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.704931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.704960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.704977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.704991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.705003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.705042] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.705058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.720352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.720386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.720606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.720642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.289 [2024-10-07 13:36:26.720660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.720754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.720780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.289 [2024-10-07 13:36:26.720797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.720822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.720844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.720865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.720880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.720894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.720911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.720926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.720939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.720964] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.720981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.736832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.736865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.737054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.737082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.289 [2024-10-07 13:36:26.737100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.737210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.737236] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.289 [2024-10-07 13:36:26.737253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.737591] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.737638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.738020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.738046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.738076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.738094] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.738108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.738125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.738198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.738219] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.752372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.752405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.752938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.752968] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.289 [2024-10-07 13:36:26.752985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.753097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.753123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.289 [2024-10-07 13:36:26.753138] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.753356] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.753385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.753433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.753453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.753467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.753484] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.753498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.753511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.753712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.753736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.289 [2024-10-07 13:36:26.762892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.762927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.289 [2024-10-07 13:36:26.764691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.764724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.289 [2024-10-07 13:36:26.764741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.764864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.289 [2024-10-07 13:36:26.764890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.289 [2024-10-07 13:36:26.764906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.289 [2024-10-07 13:36:26.767057] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.767094] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.289 [2024-10-07 13:36:26.767969] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.289 [2024-10-07 13:36:26.767994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.289 [2024-10-07 13:36:26.768008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.289 [2024-10-07 13:36:26.768024] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.289 [2024-10-07 13:36:26.768038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.289 [2024-10-07 13:36:26.768050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.289 [2024-10-07 13:36:26.768316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.289 [2024-10-07 13:36:26.768340] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.289 [2024-10-07 13:36:26.773012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.289 [2024-10-07 13:36:26.773042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.289 [2024-10-07 13:36:26.773249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.289 [2024-10-07 13:36:26.773276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.289 [2024-10-07 13:36:26.773294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.773375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.773401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.290 [2024-10-07 13:36:26.773418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.773443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.773465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.773486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.773501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.773514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.773547] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.773561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.773574] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.773597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.773628] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.290 [2024-10-07 13:36:26.783124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.783173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.783333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.783362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.290 [2024-10-07 13:36:26.783390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.783721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.783753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.290 [2024-10-07 13:36:26.783770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.783790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.784036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.784065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.784079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.784107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.290 [2024-10-07 13:36:26.784177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.784199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.784213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.784226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.784250] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.794617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.794650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.794858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.794888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.290 [2024-10-07 13:36:26.794905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.794990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.795017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.290 [2024-10-07 13:36:26.795033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.795155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.795181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.797356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.797385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.797399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.797417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.797432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.797445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.798271] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.798296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.290 [2024-10-07 13:36:26.804739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.804786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.804936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.804964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.290 [2024-10-07 13:36:26.804981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.805068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.805094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.290 [2024-10-07 13:36:26.805110] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.805129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.805155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.805174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.805187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.805199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.290 [2024-10-07 13:36:26.805225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.805243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.805256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.805268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.805291] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.814882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.814932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.815063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.815092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.290 [2024-10-07 13:36:26.815109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.815469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.815500] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.290 [2024-10-07 13:36:26.815517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.815537] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.815589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.815617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.815631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.815644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.815836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.815861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.815875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.815904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.815970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.290 [2024-10-07 13:36:26.828625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.828681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.829071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.829102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.290 [2024-10-07 13:36:26.829120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.829257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.829282] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.290 [2024-10-07 13:36:26.829298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.829507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.829535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.829740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.829764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.829778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.290 [2024-10-07 13:36:26.829796] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.829810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.829823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.829869] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.829890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.842605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.842653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.843283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.843315] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.290 [2024-10-07 13:36:26.843333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.843453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.843479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.290 [2024-10-07 13:36:26.843495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.843727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.843756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.843819] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.843840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.843854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.843871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.843886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.843899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.843924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.843942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.290 [2024-10-07 13:36:26.854907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.854942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.855156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.855185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.290 [2024-10-07 13:36:26.855202] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.855308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.855334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.290 [2024-10-07 13:36:26.855350] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.855376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.855398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.855419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.855435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.855449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.290 [2024-10-07 13:36:26.855466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.855480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.855493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.855518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.855541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.867297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.867331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.867561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.867590] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.290 [2024-10-07 13:36:26.867607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.867726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.867753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.290 [2024-10-07 13:36:26.867769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.867878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.867904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.868021] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.868042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.868070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.868087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.868101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.868113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.871558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.871586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.290 [2024-10-07 13:36:26.877410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.877455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.877634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.877662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.290 [2024-10-07 13:36:26.877688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.877811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.877837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.290 [2024-10-07 13:36:26.877854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.877872] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.878162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.878204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.878223] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.878236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.290 [2024-10-07 13:36:26.878383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.878406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.290 [2024-10-07 13:36:26.878421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.290 [2024-10-07 13:36:26.878436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.290 [2024-10-07 13:36:26.878546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.290 [2024-10-07 13:36:26.888024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.888058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.290 [2024-10-07 13:36:26.888221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.888250] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.290 [2024-10-07 13:36:26.888267] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.888349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.290 [2024-10-07 13:36:26.888376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.290 [2024-10-07 13:36:26.888391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.290 [2024-10-07 13:36:26.888689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.888733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.290 [2024-10-07 13:36:26.888941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.888967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.888982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.889000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.889015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.889028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.889094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.291 [2024-10-07 13:36:26.889114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.291 [2024-10-07 13:36:26.900049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.900083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.902984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.903017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.291 [2024-10-07 13:36:26.903034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.903117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.903143] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.291 [2024-10-07 13:36:26.903160] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.903662] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.903717] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.903958] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.903983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.903998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.291 [2024-10-07 13:36:26.904017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.904031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.904045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.904297] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.291 [2024-10-07 13:36:26.904322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.291 [2024-10-07 13:36:26.910161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.910205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.910384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.910411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.291 [2024-10-07 13:36:26.910428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.910632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.910661] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.291 [2024-10-07 13:36:26.910687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.910707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.910830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.910854] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.910869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.910882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.910989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.291 [2024-10-07 13:36:26.911011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.911024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.911038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.911148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.291 [2024-10-07 13:36:26.920243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.920424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.920453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.291 [2024-10-07 13:36:26.920470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.920508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.920540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.920569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.920586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.920599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.920623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.291 [2024-10-07 13:36:26.920747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.920774] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.291 [2024-10-07 13:36:26.920790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.920816] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.920840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.920856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.920869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.920893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.291 [2024-10-07 13:36:26.932280] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.932314] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.932482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.932511] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.291 [2024-10-07 13:36:26.932529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.932614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.932640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.291 [2024-10-07 13:36:26.932655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.932688] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.932712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.932733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.932748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.932767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.291 [2024-10-07 13:36:26.932785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.932800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.932813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.932837] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.291 [2024-10-07 13:36:26.932854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.291 [2024-10-07 13:36:26.945812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.945846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.946050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.946079] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.291 [2024-10-07 13:36:26.946097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.946199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.946226] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.291 [2024-10-07 13:36:26.946242] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.946354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.946388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.946521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.946542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.946555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.946572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.946585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.946598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.949998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.291 [2024-10-07 13:36:26.950027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.291 [2024-10-07 13:36:26.955930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.955990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.956190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.956217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.291 [2024-10-07 13:36:26.956234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.956322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.956348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.291 [2024-10-07 13:36:26.956370] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.956390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.956416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.956434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.956447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.956460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.291 [2024-10-07 13:36:26.956500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.291 [2024-10-07 13:36:26.956517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.956529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.956541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.956577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.291 [2024-10-07 13:36:26.966031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.966202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.966231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.291 [2024-10-07 13:36:26.966248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.966446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.966523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.966574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.966590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.966605] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.966630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.291 [2024-10-07 13:36:26.966766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.966793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.291 [2024-10-07 13:36:26.966810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.967267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.967521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.967547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.967562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.967614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.291 [2024-10-07 13:36:26.979244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.979283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.979909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.979941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.291 [2024-10-07 13:36:26.979959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.980065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.980091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.291 [2024-10-07 13:36:26.980107] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.980326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.980355] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.980579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.980603] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.980617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.291 [2024-10-07 13:36:26.980635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.980649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.980663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.980911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.291 [2024-10-07 13:36:26.980935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.291 [2024-10-07 13:36:26.990752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.990785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:26.991087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.991118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.291 [2024-10-07 13:36:26.991135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.991270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:26.991295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.291 [2024-10-07 13:36:26.991312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:26.991432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.991460] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.291 [2024-10-07 13:36:26.991561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.991583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.991602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.991635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.291 [2024-10-07 13:36:26.991649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.291 [2024-10-07 13:36:26.991662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.291 [2024-10-07 13:36:26.992873] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.291 [2024-10-07 13:36:26.992898] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.291 [2024-10-07 13:36:27.000882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:27.000930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.291 [2024-10-07 13:36:27.001149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:27.001177] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.291 [2024-10-07 13:36:27.001194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.291 [2024-10-07 13:36:27.001313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.291 [2024-10-07 13:36:27.001339] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.292 [2024-10-07 13:36:27.001355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.292 [2024-10-07 13:36:27.001374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.292 [2024-10-07 13:36:27.001400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.292 [2024-10-07 13:36:27.001420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.292 [2024-10-07 13:36:27.001435] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.292 [2024-10-07 13:36:27.001448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.292 [2024-10-07 13:36:27.001474] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.292 [2024-10-07 13:36:27.001491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.292 [2024-10-07 13:36:27.001505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.292 [2024-10-07 13:36:27.001534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.292 [2024-10-07 13:36:27.001556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.292 [2024-10-07 13:36:27.010984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.292 [2024-10-07 13:36:27.011151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.292 [2024-10-07 13:36:27.011181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.292 [2024-10-07 13:36:27.011198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.292 [2024-10-07 13:36:27.011458] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.292 [2024-10-07 13:36:27.011538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.292 [2024-10-07 13:36:27.011586] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.292 [2024-10-07 13:36:27.011608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.292 [2024-10-07 13:36:27.011624] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.292 [2024-10-07 13:36:27.011649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.292 [2024-10-07 13:36:27.011770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.292 [2024-10-07 13:36:27.011797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.292 [2024-10-07 13:36:27.011814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.292 [2024-10-07 13:36:27.011998] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.292 [2024-10-07 13:36:27.012072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.292 [2024-10-07 13:36:27.012093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.292 [2024-10-07 13:36:27.012122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.292 [2024-10-07 13:36:27.012149] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.292 [2024-10-07 13:36:27.024679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.292 [2024-10-07 13:36:27.024713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.292 [2024-10-07 13:36:27.025322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.292 [2024-10-07 13:36:27.025354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.292 [2024-10-07 13:36:27.025372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.292 [2024-10-07 13:36:27.025478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.292 [2024-10-07 13:36:27.025504] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.292 [2024-10-07 13:36:27.025520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.292 [2024-10-07 13:36:27.025751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.292 [2024-10-07 13:36:27.025780] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.292 [2024-10-07 13:36:27.025990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.292 [2024-10-07 13:36:27.026014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.292 [2024-10-07 13:36:27.026028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.292 [2024-10-07 13:36:27.026046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.026061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.026074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.026306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.026329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.035912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.035951] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.036213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.036244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.292 [2024-10-07 13:36:27.036261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.036372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.036398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.292 [2024-10-07 13:36:27.036414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.038617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.038650] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.039044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.039086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.039100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.039118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.039133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.039146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.039826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.039851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.047782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.047816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.048207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.048239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.292 [2024-10-07 13:36:27.048256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.048361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.048387] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.292 [2024-10-07 13:36:27.048402] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.048869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.048901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.049038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.049061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.049075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.049098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.049115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.049128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.049235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.049273] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.057896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.057943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.058144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.058172] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.292 [2024-10-07 13:36:27.058189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.058312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.058338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.292 [2024-10-07 13:36:27.058354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.058373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.058400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.058418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.058431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.058445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.058471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.058488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.058501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.058514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.058537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.068212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.068244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.068360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.068389] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.292 [2024-10-07 13:36:27.068406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.068517] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.068543] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.292 [2024-10-07 13:36:27.068559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.069075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.069105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.069390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.069416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.069430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.069448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.069463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.069476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.069720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.069745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.078401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.080713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.080827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.080855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.292 [2024-10-07 13:36:27.080872] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.081893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.081923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.292 [2024-10-07 13:36:27.081940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.081958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.082215] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.082241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.082256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.082285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.082581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.082606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.082620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.082634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.082744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.088485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.088718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.088747] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.292 [2024-10-07 13:36:27.088770] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.088797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.088838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.088858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.088872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.088897] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.091749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.091878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.091906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.292 [2024-10-07 13:36:27.091923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.091949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.092366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.092407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.092422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.092909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.098690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.098868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.098897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.292 [2024-10-07 13:36:27.098914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.099099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.099156] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.099177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.099191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.099217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.101850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.101987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.102014] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.292 [2024-10-07 13:36:27.102031] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.102251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.102495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.102520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.102535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.102655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.110592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.110777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.110807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.292 [2024-10-07 13:36:27.110824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.111334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.292 [2024-10-07 13:36:27.111599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.292 [2024-10-07 13:36:27.111624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.292 [2024-10-07 13:36:27.111639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.292 [2024-10-07 13:36:27.111700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.292 [2024-10-07 13:36:27.112019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.292 [2024-10-07 13:36:27.112176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.292 [2024-10-07 13:36:27.112206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.292 [2024-10-07 13:36:27.112224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.292 [2024-10-07 13:36:27.112250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.293 [2024-10-07 13:36:27.112274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.293 [2024-10-07 13:36:27.112289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.293 [2024-10-07 13:36:27.112302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.293 [2024-10-07 13:36:27.112586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.293 [2024-10-07 13:36:27.120761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.293 [2024-10-07 13:36:27.120884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.293 [2024-10-07 13:36:27.120913] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.293 [2024-10-07 13:36:27.120930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.293 [2024-10-07 13:36:27.122018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.293 [2024-10-07 13:36:27.122238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.293 [2024-10-07 13:36:27.122262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.293 [2024-10-07 13:36:27.122276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.293 [2024-10-07 13:36:27.122394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.293 [2024-10-07 13:36:27.122513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.293 [2024-10-07 13:36:27.122767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.293 [2024-10-07 13:36:27.122798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.293 [2024-10-07 13:36:27.122815] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.293 [2024-10-07 13:36:27.123622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.293 [2024-10-07 13:36:27.125505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.293 [2024-10-07 13:36:27.125531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.293 [2024-10-07 13:36:27.125546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.293 [2024-10-07 13:36:27.126132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.293 [2024-10-07 13:36:27.130845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.293 [2024-10-07 13:36:27.131030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.293 [2024-10-07 13:36:27.131059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.293 [2024-10-07 13:36:27.131075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.293 [2024-10-07 13:36:27.131101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.293 [2024-10-07 13:36:27.131126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.293 [2024-10-07 13:36:27.131141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.293 [2024-10-07 13:36:27.131155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.293 [2024-10-07 13:36:27.131179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.293 [2024-10-07 13:36:27.132606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.293 [2024-10-07 13:36:27.132808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.293 [2024-10-07 13:36:27.132836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.293 [2024-10-07 13:36:27.132853] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.293 [2024-10-07 13:36:27.132879] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.293 [2024-10-07 13:36:27.132903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.293 [2024-10-07 13:36:27.132917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.293 [2024-10-07 13:36:27.132931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.293 [2024-10-07 13:36:27.132955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.293 [2024-10-07 13:36:27.141313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.293 [2024-10-07 13:36:27.141458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.293 [2024-10-07 13:36:27.141486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.293 [2024-10-07 13:36:27.141508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.293 [2024-10-07 13:36:27.141535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.293 [2024-10-07 13:36:27.141740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.293 [2024-10-07 13:36:27.141763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.293 [2024-10-07 13:36:27.141778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.293 [2024-10-07 13:36:27.141831] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.293 [2024-10-07 13:36:27.142852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.293 [2024-10-07 13:36:27.142961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.293 [2024-10-07 13:36:27.142989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.293 [2024-10-07 13:36:27.143005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.293 [2024-10-07 13:36:27.143030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.293 [2024-10-07 13:36:27.143228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.293 [2024-10-07 13:36:27.143251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.293 [2024-10-07 13:36:27.143265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.293 [2024-10-07 13:36:27.143331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.293 [2024-10-07 13:36:27.154889] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.293 [2024-10-07 13:36:27.155162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.293 [2024-10-07 13:36:27.155369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.293 [2024-10-07 13:36:27.155399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.293 [2024-10-07 13:36:27.155417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.293 [2024-10-07 13:36:27.156032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.293 [2024-10-07 13:36:27.156063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.293 [2024-10-07 13:36:27.156081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.293 [2024-10-07 13:36:27.156101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.293 [2024-10-07 13:36:27.156398] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.293 [2024-10-07 13:36:27.156441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.293 [2024-10-07 13:36:27.156455] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.293 [2024-10-07 13:36:27.156470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.293 [2024-10-07 13:36:27.156715] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.293 [2024-10-07 13:36:27.156741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.293 [2024-10-07 13:36:27.156762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.293 [2024-10-07 13:36:27.156777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.293 [2024-10-07 13:36:27.156846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.293 [2024-10-07 13:36:27.165627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.293 [2024-10-07 13:36:27.165660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.293 [2024-10-07 13:36:27.165981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.293 [2024-10-07 13:36:27.166012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.293 [2024-10-07 13:36:27.166030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.293 [2024-10-07 13:36:27.166144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.293 [2024-10-07 13:36:27.166171] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.293 [2024-10-07 13:36:27.166187] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.293 [2024-10-07 13:36:27.166294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.293 [2024-10-07 13:36:27.166322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.293 [2024-10-07 13:36:27.166453] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.293 [2024-10-07 13:36:27.166474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.293 [2024-10-07 13:36:27.166487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.293 [2024-10-07 13:36:27.166518] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.293 [2024-10-07 13:36:27.166533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.293 [2024-10-07 13:36:27.166545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.293 [2024-10-07 13:36:27.166677] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.293 [2024-10-07 13:36:27.166699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.293 [2024-10-07 13:36:27.175892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.293 [2024-10-07 13:36:27.175924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.293 [2024-10-07 13:36:27.176120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.293 [2024-10-07 13:36:27.176150] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.293 [2024-10-07 13:36:27.176167] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.293 [2024-10-07 13:36:27.176281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.293 [2024-10-07 13:36:27.176309] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.293 [2024-10-07 13:36:27.176325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.293 [2024-10-07 13:36:27.176350] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.293 [2024-10-07 13:36:27.176371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.293 [2024-10-07 13:36:27.176398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.293 [2024-10-07 13:36:27.176415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.293 [2024-10-07 13:36:27.176427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.293 [2024-10-07 13:36:27.176445] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.293 [2024-10-07 13:36:27.176459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.293 [2024-10-07 13:36:27.176472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.293 [2024-10-07 13:36:27.176496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.293 [2024-10-07 13:36:27.176513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.293 [2024-10-07 13:36:27.186033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.293 [2024-10-07 13:36:27.186066] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.293 [2024-10-07 13:36:27.186232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.293 [2024-10-07 13:36:27.186262] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.293 [2024-10-07 13:36:27.186279] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.293 [2024-10-07 13:36:27.186386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.293 [2024-10-07 13:36:27.186413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.293 [2024-10-07 13:36:27.186430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.293 [2024-10-07 13:36:27.186455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.293 [2024-10-07 13:36:27.186476] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.293 [2024-10-07 13:36:27.186497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.293 [2024-10-07 13:36:27.186512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.293 [2024-10-07 13:36:27.186525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.293 [2024-10-07 13:36:27.186542] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.293 [2024-10-07 13:36:27.186556] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.293 [2024-10-07 13:36:27.186569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.293 [2024-10-07 13:36:27.186593] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.293 [2024-10-07 13:36:27.186610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.293 [2024-10-07 13:36:27.196418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.293 [2024-10-07 13:36:27.196451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.293 [2024-10-07 13:36:27.196682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.293 [2024-10-07 13:36:27.196717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.293 [2024-10-07 13:36:27.196749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.293 [2024-10-07 13:36:27.196845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.293 [2024-10-07 13:36:27.196873] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.293 [2024-10-07 13:36:27.196889] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.293 [2024-10-07 13:36:27.197724] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.293 [2024-10-07 13:36:27.197753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.293 [2024-10-07 13:36:27.199469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.293 [2024-10-07 13:36:27.199496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.293 [2024-10-07 13:36:27.199510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.293 [2024-10-07 13:36:27.199527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.293 [2024-10-07 13:36:27.199541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.293 [2024-10-07 13:36:27.199554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.293 [2024-10-07 13:36:27.200146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.293 [2024-10-07 13:36:27.200172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.293 [2024-10-07 13:36:27.206595] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.293 [2024-10-07 13:36:27.206625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.293 [2024-10-07 13:36:27.206821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.293 [2024-10-07 13:36:27.206850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.293 [2024-10-07 13:36:27.206868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.293 [2024-10-07 13:36:27.206980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.293 [2024-10-07 13:36:27.207007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.293 [2024-10-07 13:36:27.207023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.293 [2024-10-07 13:36:27.207048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.293 [2024-10-07 13:36:27.207069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.293 [2024-10-07 13:36:27.207090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.293 [2024-10-07 13:36:27.207105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.293 [2024-10-07 13:36:27.207118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.293 [2024-10-07 13:36:27.207135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.293 [2024-10-07 13:36:27.207150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.293 [2024-10-07 13:36:27.207162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.293 [2024-10-07 13:36:27.207192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.293 [2024-10-07 13:36:27.207225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.293 [2024-10-07 13:36:27.216721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.293 [2024-10-07 13:36:27.216770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.216930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.216960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.294 [2024-10-07 13:36:27.216977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.217084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.217112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.294 [2024-10-07 13:36:27.217128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.217147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.217173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.217191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.217204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.217217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.294 [2024-10-07 13:36:27.217242] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.217258] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.217271] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.217284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.217322] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.231004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.231036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.231424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.231456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.294 [2024-10-07 13:36:27.231473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.231584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.231610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.294 [2024-10-07 13:36:27.231625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.231877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.231907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.232161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.232187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.232201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.232218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.232233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.232246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.232450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.232475] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.294 [2024-10-07 13:36:27.245500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.246502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.246679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.246710] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.294 [2024-10-07 13:36:27.246727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.246904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.246934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.294 [2024-10-07 13:36:27.246951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.246970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.247007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.247028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.247042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.247055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.294 [2024-10-07 13:36:27.247080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.247098] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.247111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.247124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.247146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.260638] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.260696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.261310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.261341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.294 [2024-10-07 13:36:27.261358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.261474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.261499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.294 [2024-10-07 13:36:27.261515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.261811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.261856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.262076] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.262100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.262115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.262132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.262147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.262159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.262225] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.262245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.294 [2024-10-07 13:36:27.276811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.276847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.277568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.277599] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.294 [2024-10-07 13:36:27.277616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.277726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.277753] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.294 [2024-10-07 13:36:27.277769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.277997] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.278028] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.278522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.278546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.278559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.294 [2024-10-07 13:36:27.278576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.278590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.278602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.278856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.278887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.289102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.289136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.289331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.289362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.294 [2024-10-07 13:36:27.289380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.289492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.289521] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.294 [2024-10-07 13:36:27.289538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.291378] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.291410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.292238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.292262] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.292276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.292292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.292306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.292319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.292766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.292808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.294 [2024-10-07 13:36:27.299215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.299277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.299458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.299487] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.294 [2024-10-07 13:36:27.299504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.299737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.299767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.294 [2024-10-07 13:36:27.299784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.299803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.299925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.299950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.299970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.299984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.294 [2024-10-07 13:36:27.300097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.300121] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.300134] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.300148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.300255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.309315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.309459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.309489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.294 [2024-10-07 13:36:27.309507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.309716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.309810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.309845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.309863] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.309876] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.309916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.310037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.310065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.294 [2024-10-07 13:36:27.310082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.310285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.310356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.310393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.310407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.310433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.294 [2024-10-07 13:36:27.324192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.324224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.324541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.324572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.294 [2024-10-07 13:36:27.324605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.324764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.324798] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.294 [2024-10-07 13:36:27.324816] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.325020] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.325050] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.325250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.325274] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.325289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.294 [2024-10-07 13:36:27.325306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.325321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.325333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.325562] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.325587] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.339634] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.339677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.339835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.339865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.294 [2024-10-07 13:36:27.339882] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.339960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.339990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.294 [2024-10-07 13:36:27.340006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.340607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.340637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.294 [2024-10-07 13:36:27.340896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.340921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.340935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.340952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.294 [2024-10-07 13:36:27.340967] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.294 [2024-10-07 13:36:27.340980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.294 [2024-10-07 13:36:27.341212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.294 [2024-10-07 13:36:27.341237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.294 [2024-10-07 13:36:27.354456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.354505] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.294 [2024-10-07 13:36:27.354891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.354923] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.294 [2024-10-07 13:36:27.354940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.294 [2024-10-07 13:36:27.355052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.294 [2024-10-07 13:36:27.355078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.294 [2024-10-07 13:36:27.355094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.355299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.355329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.355529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.355553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.355567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.295 [2024-10-07 13:36:27.355584] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.355599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.355612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.295 [2024-10-07 13:36:27.355662] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.295 [2024-10-07 13:36:27.355692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.295 [2024-10-07 13:36:27.369383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.295 [2024-10-07 13:36:27.369416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.295 [2024-10-07 13:36:27.369637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.295 [2024-10-07 13:36:27.369683] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.295 [2024-10-07 13:36:27.369703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.369794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.295 [2024-10-07 13:36:27.369821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.295 [2024-10-07 13:36:27.369838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.369864] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.369886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.369907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.369923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.369945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.295 [2024-10-07 13:36:27.369975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.369990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.370003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.295 [2024-10-07 13:36:27.370028] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.295 [2024-10-07 13:36:27.370044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.295 [2024-10-07 13:36:27.384510] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.295 [2024-10-07 13:36:27.384545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.295 [2024-10-07 13:36:27.385423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.295 [2024-10-07 13:36:27.385454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.295 [2024-10-07 13:36:27.385472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.385581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.295 [2024-10-07 13:36:27.385607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.295 [2024-10-07 13:36:27.385623] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.386023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.386052] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.386125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.386144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.386173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.295 [2024-10-07 13:36:27.386191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.386206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.386219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.295 [2024-10-07 13:36:27.386463] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.295 [2024-10-07 13:36:27.386488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.295 [2024-10-07 13:36:27.401340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.295 [2024-10-07 13:36:27.401374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.295 [2024-10-07 13:36:27.401897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.295 [2024-10-07 13:36:27.401929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.295 [2024-10-07 13:36:27.401946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.402057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.295 [2024-10-07 13:36:27.402084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.295 [2024-10-07 13:36:27.402105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.402324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.402354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.402402] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.402423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.402437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.295 [2024-10-07 13:36:27.402455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.402469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.402481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.295 [2024-10-07 13:36:27.402699] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.295 [2024-10-07 13:36:27.402724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.295 [2024-10-07 13:36:27.413364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.295 [2024-10-07 13:36:27.413398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.295 [2024-10-07 13:36:27.415580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.295 [2024-10-07 13:36:27.415612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.295 [2024-10-07 13:36:27.415635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.415767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.295 [2024-10-07 13:36:27.415793] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.295 [2024-10-07 13:36:27.415810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.416490] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.416520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.416993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.417018] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.417046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.295 [2024-10-07 13:36:27.417066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.417080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.417091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.295 [2024-10-07 13:36:27.417169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.295 [2024-10-07 13:36:27.417189] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.295 [2024-10-07 13:36:27.425788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.295 [2024-10-07 13:36:27.425962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.295 [2024-10-07 13:36:27.426105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.295 [2024-10-07 13:36:27.426135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.295 [2024-10-07 13:36:27.426152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.426378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.295 [2024-10-07 13:36:27.426407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.295 [2024-10-07 13:36:27.426423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.426442] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.426552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.426577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.426590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.426603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.295 [2024-10-07 13:36:27.429601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.295 [2024-10-07 13:36:27.429628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.429642] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.429683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.295 [2024-10-07 13:36:27.431396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.295 [2024-10-07 13:36:27.435880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.295 [2024-10-07 13:36:27.436056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.295 [2024-10-07 13:36:27.436085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.295 [2024-10-07 13:36:27.436102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.436127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.436164] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.436183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.436197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.295 [2024-10-07 13:36:27.436224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.295 [2024-10-07 13:36:27.436244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.295 [2024-10-07 13:36:27.436399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.295 [2024-10-07 13:36:27.436427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.295 [2024-10-07 13:36:27.436443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.436759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.436929] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.436965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.436980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.295 [2024-10-07 13:36:27.437090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.295 [2024-10-07 13:36:27.446832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.295 [2024-10-07 13:36:27.446866] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.295 [2024-10-07 13:36:27.447072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.295 [2024-10-07 13:36:27.447107] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.295 [2024-10-07 13:36:27.447135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.447236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.295 [2024-10-07 13:36:27.447264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.295 [2024-10-07 13:36:27.447280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.295 [2024-10-07 13:36:27.447480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.447509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.295 [2024-10-07 13:36:27.447758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.295 [2024-10-07 13:36:27.447783] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.295 [2024-10-07 13:36:27.447798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.295 [2024-10-07 13:36:27.447816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.295 [2024-10-07 13:36:27.447831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.295 [2024-10-07 13:36:27.447844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.295 [2024-10-07 13:36:27.447894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.295 [2024-10-07 13:36:27.447915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.295 [2024-10-07 13:36:27.461427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.295 [2024-10-07 13:36:27.461461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.295 [2024-10-07 13:36:27.461766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.295 [2024-10-07 13:36:27.461797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.295 [2024-10-07 13:36:27.461814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.295 [2024-10-07 13:36:27.461928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.295 [2024-10-07 13:36:27.461955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.295 [2024-10-07 13:36:27.461982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.295 [2024-10-07 13:36:27.462192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.295 [2024-10-07 13:36:27.462221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.295 [2024-10-07 13:36:27.462422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.295 [2024-10-07 13:36:27.462445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.295 [2024-10-07 13:36:27.462460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.295 [2024-10-07 13:36:27.462478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.295 [2024-10-07 13:36:27.462492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.295 [2024-10-07 13:36:27.462505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.295 [2024-10-07 13:36:27.462568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.295 [2024-10-07 13:36:27.462588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.295 [2024-10-07 13:36:27.477069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.295 [2024-10-07 13:36:27.477102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.295 [2024-10-07 13:36:27.477215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.295 [2024-10-07 13:36:27.477255] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.295 [2024-10-07 13:36:27.477273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.295 [2024-10-07 13:36:27.477381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.295 [2024-10-07 13:36:27.477407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.295 [2024-10-07 13:36:27.477423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.295 [2024-10-07 13:36:27.477448] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.295 [2024-10-07 13:36:27.477470] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.295 [2024-10-07 13:36:27.477491] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.295 [2024-10-07 13:36:27.477506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.295 [2024-10-07 13:36:27.477520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.295 [2024-10-07 13:36:27.477537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.295 [2024-10-07 13:36:27.477551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.295 [2024-10-07 13:36:27.477564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.295 [2024-10-07 13:36:27.477589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.295 [2024-10-07 13:36:27.477620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.295 [2024-10-07 13:36:27.494052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.295 [2024-10-07 13:36:27.494084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.295 [2024-10-07 13:36:27.494310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.295 [2024-10-07 13:36:27.494340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.295 [2024-10-07 13:36:27.494357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.295 [2024-10-07 13:36:27.494466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.295 [2024-10-07 13:36:27.494493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.295 [2024-10-07 13:36:27.494509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.295 [2024-10-07 13:36:27.494904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.295 [2024-10-07 13:36:27.494933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.295 [2024-10-07 13:36:27.495171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.295 [2024-10-07 13:36:27.495196] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.295 [2024-10-07 13:36:27.495211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.295 [2024-10-07 13:36:27.495228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.495243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.495255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.495748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.495773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.509786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.509819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.509953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.509988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.296 [2024-10-07 13:36:27.510005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.510086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.510113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.296 [2024-10-07 13:36:27.510129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.510600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.510628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.510915] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.510940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.510955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.510972] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.511019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.511033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.511270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.511295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.525108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.525156] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.525294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.525324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.296 [2024-10-07 13:36:27.525341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.525452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.525479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.296 [2024-10-07 13:36:27.525495] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.525978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.526007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.526285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.526311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.526325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.526342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.526374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.526387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.526622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.526647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.537134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.537167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.537402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.537432] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.296 [2024-10-07 13:36:27.537450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.537562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.537589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.296 [2024-10-07 13:36:27.537605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.537728] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.537769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.539916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.539942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.539964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.539981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.539996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.540008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.540865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.540891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.547252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.547297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.547478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.547507] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.296 [2024-10-07 13:36:27.547525] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.547639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.547675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.296 [2024-10-07 13:36:27.547694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.547713] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.547739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.547758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.547771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.547784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.547818] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.547837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.547851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.547863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.547886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.557335] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.557675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.557707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.296 [2024-10-07 13:36:27.557730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.557796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.557832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.557862] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.557878] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.557891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.558088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.558266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.558293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.296 [2024-10-07 13:36:27.558310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.558360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.558389] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.558405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.558418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.558443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.571898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.571931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.572050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.572081] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.296 [2024-10-07 13:36:27.572099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.572183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.572209] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.296 [2024-10-07 13:36:27.572225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.572250] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.572271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.572293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.572308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.572321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.572338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.572352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.572371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.572397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.572429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.587164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.587197] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.587978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.588010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.296 [2024-10-07 13:36:27.588027] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.588140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.588166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.296 [2024-10-07 13:36:27.588183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.588270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.588297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.588319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.588335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.588349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.588366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.588380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.588393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.588417] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.588433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.601498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.601533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.601985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.602017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.296 [2024-10-07 13:36:27.602034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.602133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.602164] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.296 [2024-10-07 13:36:27.602180] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.602385] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.602421] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.602903] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.602938] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.602952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.602983] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.603004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.603016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.603263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.603296] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.613279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.613312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.613550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.613581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.296 [2024-10-07 13:36:27.613598] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.613708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.613736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.296 [2024-10-07 13:36:27.613753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.296 8398.17 IOPS, 32.81 MiB/s [2024-10-07T11:36:38.008Z] [2024-10-07 13:36:27.615456] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.615483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.296 [2024-10-07 13:36:27.615596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.615618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.615633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.615651] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.296 [2024-10-07 13:36:27.615671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.296 [2024-10-07 13:36:27.615686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.296 [2024-10-07 13:36:27.618688] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.618717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.296 [2024-10-07 13:36:27.623753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.623785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.296 [2024-10-07 13:36:27.623927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.623966] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.296 [2024-10-07 13:36:27.623989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.624106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.296 [2024-10-07 13:36:27.624133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.296 [2024-10-07 13:36:27.624149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.296 [2024-10-07 13:36:27.624174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.297 [2024-10-07 13:36:27.624196] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.297 [2024-10-07 13:36:27.624217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.297 [2024-10-07 13:36:27.624232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.297 [2024-10-07 13:36:27.624245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.297 [2024-10-07 13:36:27.624262] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.297 [2024-10-07 13:36:27.624276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.297 [2024-10-07 13:36:27.624289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.297 [2024-10-07 13:36:27.624314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.297 [2024-10-07 13:36:27.624330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.297 [2024-10-07 13:36:27.634039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.297 [2024-10-07 13:36:27.634071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.297 [2024-10-07 13:36:27.634218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.297 [2024-10-07 13:36:27.634248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.297 [2024-10-07 13:36:27.634265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.297 [2024-10-07 13:36:27.634374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.297 [2024-10-07 13:36:27.634401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.297 [2024-10-07 13:36:27.634417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.297 [2024-10-07 13:36:27.634602] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.297 [2024-10-07 13:36:27.634647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.297 [2024-10-07 13:36:27.634882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.297 [2024-10-07 13:36:27.634907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.297 [2024-10-07 13:36:27.634922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.297 [2024-10-07 13:36:27.634940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.297 [2024-10-07 13:36:27.634955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.297 [2024-10-07 13:36:27.634973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.297 [2024-10-07 13:36:27.635040] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.297 [2024-10-07 13:36:27.635061] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.297 [2024-10-07 13:36:27.648371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.297 [2024-10-07 13:36:27.648404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.297 [2024-10-07 13:36:27.648904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.297 [2024-10-07 13:36:27.648935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.297 [2024-10-07 13:36:27.648952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.297 [2024-10-07 13:36:27.649053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.297 [2024-10-07 13:36:27.649080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.297 [2024-10-07 13:36:27.649096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.297 [2024-10-07 13:36:27.649313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.297 [2024-10-07 13:36:27.649343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.297 [2024-10-07 13:36:27.649543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.297 [2024-10-07 13:36:27.649567] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.297 [2024-10-07 13:36:27.649581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.297 [2024-10-07 13:36:27.649598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.297 [2024-10-07 13:36:27.649613] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.297 [2024-10-07 13:36:27.649626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.297 [2024-10-07 13:36:27.649702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.297 [2024-10-07 13:36:27.649738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.297 [2024-10-07 13:36:27.663971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.297 [2024-10-07 13:36:27.664003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.297 [2024-10-07 13:36:27.664152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.297 [2024-10-07 13:36:27.664182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.297 [2024-10-07 13:36:27.664199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.297 [2024-10-07 13:36:27.664304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.297 [2024-10-07 13:36:27.664331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.297 [2024-10-07 13:36:27.664347] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.297 [2024-10-07 13:36:27.664373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.297 [2024-10-07 13:36:27.664395] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.297 [2024-10-07 13:36:27.664422] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.297 [2024-10-07 13:36:27.664438] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.297 [2024-10-07 13:36:27.664451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.297 [2024-10-07 13:36:27.664468] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.297 [2024-10-07 13:36:27.664482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.297 [2024-10-07 13:36:27.664495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.297 [2024-10-07 13:36:27.664519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.297 [2024-10-07 13:36:27.664535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.297 [2024-10-07 13:36:27.676727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.297 [2024-10-07 13:36:27.676761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.297 [2024-10-07 13:36:27.676977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.297 [2024-10-07 13:36:27.677007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.297 [2024-10-07 13:36:27.677024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.297 [2024-10-07 13:36:27.677112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.297 [2024-10-07 13:36:27.677141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.297 [2024-10-07 13:36:27.677157] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.297 [2024-10-07 13:36:27.677280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.297 [2024-10-07 13:36:27.677307] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.297 [2024-10-07 13:36:27.677419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.297 [2024-10-07 13:36:27.677440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.297 [2024-10-07 13:36:27.677453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.297 [2024-10-07 13:36:27.677469] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.297 [2024-10-07 13:36:27.677483] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.297 [2024-10-07 13:36:27.677495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.297 [2024-10-07 13:36:27.680692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.297 [2024-10-07 13:36:27.680719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.297 [2024-10-07 13:36:27.687071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.297 [2024-10-07 13:36:27.687116] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.297 [2024-10-07 13:36:27.687334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.297 [2024-10-07 13:36:27.687363] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.297 [2024-10-07 13:36:27.687385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.297 [2024-10-07 13:36:27.687491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.297 [2024-10-07 13:36:27.687518] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.297 [2024-10-07 13:36:27.687534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.297 [2024-10-07 13:36:27.687560] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.297 [2024-10-07 13:36:27.687581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.297 [2024-10-07 13:36:27.687602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.297 [2024-10-07 13:36:27.687618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.297 [2024-10-07 13:36:27.687631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.297 [2024-10-07 13:36:27.687648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.297 [2024-10-07 13:36:27.687663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.297 [2024-10-07 13:36:27.687687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.297 [2024-10-07 13:36:27.687713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.297 [2024-10-07 13:36:27.687730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.297 [2024-10-07 13:36:27.698277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.297 [2024-10-07 13:36:27.698309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.297 [2024-10-07 13:36:27.698964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.297 [2024-10-07 13:36:27.699006] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.297 [2024-10-07 13:36:27.699023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.297 [2024-10-07 13:36:27.699126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.297 [2024-10-07 13:36:27.699153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.297 [2024-10-07 13:36:27.699170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.297 [2024-10-07 13:36:27.699410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.297 [2024-10-07 13:36:27.699440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.297 [2024-10-07 13:36:27.699555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.297 [2024-10-07 13:36:27.699579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.297 [2024-10-07 13:36:27.699593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.297 [2024-10-07 13:36:27.699611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.297 [2024-10-07 13:36:27.699625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.297 [2024-10-07 13:36:27.699637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.297 [2024-10-07 13:36:27.699700] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.297 [2024-10-07 13:36:27.699722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.297 [2024-10-07 13:36:27.710598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.297 [2024-10-07 13:36:27.710631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.297 [2024-10-07 13:36:27.710749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.297 [2024-10-07 13:36:27.710779] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.297 [2024-10-07 13:36:27.710796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.297 [2024-10-07 13:36:27.710908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.297 [2024-10-07 13:36:27.710935] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.297 [2024-10-07 13:36:27.710950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.297 [2024-10-07 13:36:27.711217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.297 [2024-10-07 13:36:27.711245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.297 [2024-10-07 13:36:27.711478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.297 [2024-10-07 13:36:27.711502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.297 [2024-10-07 13:36:27.711517] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.297 [2024-10-07 13:36:27.711534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.297 [2024-10-07 13:36:27.711548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.297 [2024-10-07 13:36:27.711561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.297 [2024-10-07 13:36:27.711612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.297 [2024-10-07 13:36:27.711633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.297 [2024-10-07 13:36:27.721712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.297 [2024-10-07 13:36:27.721746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.297 [2024-10-07 13:36:27.721963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.297 [2024-10-07 13:36:27.721993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.297 [2024-10-07 13:36:27.722010] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.297 [2024-10-07 13:36:27.722085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.297 [2024-10-07 13:36:27.722111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.297 [2024-10-07 13:36:27.722127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.297 [2024-10-07 13:36:27.722237] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.297 [2024-10-07 13:36:27.722264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.297 [2024-10-07 13:36:27.723431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.298 [2024-10-07 13:36:27.723460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.298 [2024-10-07 13:36:27.723482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.298 [2024-10-07 13:36:27.723499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.298 [2024-10-07 13:36:27.723512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.298 [2024-10-07 13:36:27.723524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.298 [2024-10-07 13:36:27.724824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.298 [2024-10-07 13:36:27.724850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.298 [2024-10-07 13:36:27.731827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.298 [2024-10-07 13:36:27.731873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.298 [2024-10-07 13:36:27.732038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.298 [2024-10-07 13:36:27.732067] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.298 [2024-10-07 13:36:27.732084] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.298 [2024-10-07 13:36:27.732502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.298 [2024-10-07 13:36:27.732532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.298 [2024-10-07 13:36:27.732548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.298 [2024-10-07 13:36:27.732567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.298 [2024-10-07 13:36:27.732711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.298 [2024-10-07 13:36:27.732736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.298 [2024-10-07 13:36:27.732749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.298 [2024-10-07 13:36:27.732762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.298 [2024-10-07 13:36:27.732881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.298 [2024-10-07 13:36:27.732905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.298 [2024-10-07 13:36:27.732919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.298 [2024-10-07 13:36:27.732938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.298 [2024-10-07 13:36:27.733066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.298 [2024-10-07 13:36:27.742101] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.298 [2024-10-07 13:36:27.742133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.298 [2024-10-07 13:36:27.742247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.298 [2024-10-07 13:36:27.742278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.298 [2024-10-07 13:36:27.742295] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.298 [2024-10-07 13:36:27.742459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.298 [2024-10-07 13:36:27.742486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.298 [2024-10-07 13:36:27.742503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.298 [2024-10-07 13:36:27.742709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.298 [2024-10-07 13:36:27.742752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.298 [2024-10-07 13:36:27.743334] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.298 [2024-10-07 13:36:27.743358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.298 [2024-10-07 13:36:27.743377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.298 [2024-10-07 13:36:27.743393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.298 [2024-10-07 13:36:27.743407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.298 [2024-10-07 13:36:27.743419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.298 [2024-10-07 13:36:27.743681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.298 [2024-10-07 13:36:27.743706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.298 [2024-10-07 13:36:27.752637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.298 [2024-10-07 13:36:27.752677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.298 [2024-10-07 13:36:27.752819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.298 [2024-10-07 13:36:27.752847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.298 [2024-10-07 13:36:27.752865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.298 [2024-10-07 13:36:27.752974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.298 [2024-10-07 13:36:27.753000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.298 [2024-10-07 13:36:27.753016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.298 [2024-10-07 13:36:27.754964] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.298 [2024-10-07 13:36:27.754996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.298 [2024-10-07 13:36:27.755747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.298 [2024-10-07 13:36:27.755772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.298 [2024-10-07 13:36:27.755786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.298 [2024-10-07 13:36:27.755804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.298 [2024-10-07 13:36:27.755818] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.298 [2024-10-07 13:36:27.755830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.298 [2024-10-07 13:36:27.756361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.298 [2024-10-07 13:36:27.756390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.298 [2024-10-07 13:36:27.762761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.298 [2024-10-07 13:36:27.762808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.298 [2024-10-07 13:36:27.762972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.298 [2024-10-07 13:36:27.763002] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.298 [2024-10-07 13:36:27.763018] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.298 [2024-10-07 13:36:27.765674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.298 [2024-10-07 13:36:27.765706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.298 [2024-10-07 13:36:27.765724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.298 [2024-10-07 13:36:27.765743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.298 [2024-10-07 13:36:27.765893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.298 [2024-10-07 13:36:27.765920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.298 [2024-10-07 13:36:27.765934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.298 [2024-10-07 13:36:27.765948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.298 [2024-10-07 13:36:27.766107] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.298 [2024-10-07 13:36:27.766132] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.298 [2024-10-07 13:36:27.766146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.298 [2024-10-07 13:36:27.766160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.298 [2024-10-07 13:36:27.766265] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.298 [2024-10-07 13:36:27.773040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.298 [2024-10-07 13:36:27.773072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.298 [2024-10-07 13:36:27.773209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.298 [2024-10-07 13:36:27.773239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.298 [2024-10-07 13:36:27.773256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.298 [2024-10-07 13:36:27.773371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.298 [2024-10-07 13:36:27.773397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.298 [2024-10-07 13:36:27.773413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.298 [2024-10-07 13:36:27.773438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.298 [2024-10-07 13:36:27.773459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.298 [2024-10-07 13:36:27.773481] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.298 [2024-10-07 13:36:27.773501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.298 [2024-10-07 13:36:27.773516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.298 [2024-10-07 13:36:27.773533] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.298 [2024-10-07 13:36:27.773547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.298 [2024-10-07 13:36:27.773560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.298 [2024-10-07 13:36:27.773584] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.298 [2024-10-07 13:36:27.773601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.298 [2024-10-07 13:36:27.783848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.298 [2024-10-07 13:36:27.783882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.298 [2024-10-07 13:36:27.783988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.298 [2024-10-07 13:36:27.784019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.298 [2024-10-07 13:36:27.784036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.298 [2024-10-07 13:36:27.784111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.298 [2024-10-07 13:36:27.784138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.298 [2024-10-07 13:36:27.784154] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.298 [2024-10-07 13:36:27.784179] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.298 [2024-10-07 13:36:27.784201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.298 [2024-10-07 13:36:27.784221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.298 [2024-10-07 13:36:27.784236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.298 [2024-10-07 13:36:27.784249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.298 [2024-10-07 13:36:27.784266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.298 [2024-10-07 13:36:27.784281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.298 [2024-10-07 13:36:27.784294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.298 [2024-10-07 13:36:27.784318] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.298 [2024-10-07 13:36:27.784335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.298 [2024-10-07 13:36:27.795760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.298 [2024-10-07 13:36:27.795793] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.298 [2024-10-07 13:36:27.796056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.298 [2024-10-07 13:36:27.796086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.298 [2024-10-07 13:36:27.796104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.298 [2024-10-07 13:36:27.796213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.298 [2024-10-07 13:36:27.796246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.298 [2024-10-07 13:36:27.796263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.298 [2024-10-07 13:36:27.798330] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.298 [2024-10-07 13:36:27.798363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.298 [2024-10-07 13:36:27.799169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.799192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.799213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.799229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.799243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.799255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.799596] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.799636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.806068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.806098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.806301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.806347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.299 [2024-10-07 13:36:27.806365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.806454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.806481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.299 [2024-10-07 13:36:27.806498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.806605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.299 [2024-10-07 13:36:27.806632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.299 [2024-10-07 13:36:27.809239] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.809265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.809286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.809304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.809319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.809331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.809840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.809866] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.816316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.816348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.816482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.816512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.299 [2024-10-07 13:36:27.816529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.816612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.816639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.299 [2024-10-07 13:36:27.816655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.816689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.299 [2024-10-07 13:36:27.816712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.299 [2024-10-07 13:36:27.816733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.816748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.816761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.816778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.816792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.816805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.816829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.816846] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.829231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.829264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.829412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.829442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.299 [2024-10-07 13:36:27.829459] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.829571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.829598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.299 [2024-10-07 13:36:27.829614] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.830104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.299 [2024-10-07 13:36:27.830133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.299 [2024-10-07 13:36:27.830370] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.830394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.830414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.830432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.830447] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.830460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.830780] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.830805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.845022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.845071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.845236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.845266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.299 [2024-10-07 13:36:27.845283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.845370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.845397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.299 [2024-10-07 13:36:27.845413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.845439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.299 [2024-10-07 13:36:27.845461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.299 [2024-10-07 13:36:27.845482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.845497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.845510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.845527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.845542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.845555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.845579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.845610] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.856894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.856927] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.857149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.857179] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.299 [2024-10-07 13:36:27.857197] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.857273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.857300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.299 [2024-10-07 13:36:27.857322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.857431] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.299 [2024-10-07 13:36:27.857459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.299 [2024-10-07 13:36:27.860246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.860282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.860296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.860313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.860327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.860343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.861298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.861323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.867006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.867050] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.867235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.867263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.299 [2024-10-07 13:36:27.867280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.867372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.867398] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.299 [2024-10-07 13:36:27.867414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.867432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.299 [2024-10-07 13:36:27.867589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.299 [2024-10-07 13:36:27.867616] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.867630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.867643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.867809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.867833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.867847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.867861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.867979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.877170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.877208] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.877352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.877380] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.299 [2024-10-07 13:36:27.877396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.877503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.877529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.299 [2024-10-07 13:36:27.877545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.877741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.299 [2024-10-07 13:36:27.877769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.299 [2024-10-07 13:36:27.877817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.877837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.877851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.877869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.299 [2024-10-07 13:36:27.877883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.299 [2024-10-07 13:36:27.877896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.299 [2024-10-07 13:36:27.878078] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.878101] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.299 [2024-10-07 13:36:27.890792] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.890824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.299 [2024-10-07 13:36:27.890956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.890985] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.299 [2024-10-07 13:36:27.891001] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.299 [2024-10-07 13:36:27.891085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.299 [2024-10-07 13:36:27.891111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.299 [2024-10-07 13:36:27.891127] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.891152] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.891174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.891194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.891210] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.891224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.891246] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.891276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.891290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.891316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.891349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.900904] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.900969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.901153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.901181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.300 [2024-10-07 13:36:27.901198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.901314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.901340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.300 [2024-10-07 13:36:27.901356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.901375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.901401] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.901419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.901432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.901446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.901471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.901488] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.901500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.901514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.904044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.910993] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.911171] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.911201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.300 [2024-10-07 13:36:27.911218] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.911257] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.911290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.911319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.911336] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.911355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.911379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.911482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.911508] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.300 [2024-10-07 13:36:27.911524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.911548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.911572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.911587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.911599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.911623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.924261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.924295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.924506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.924535] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.300 [2024-10-07 13:36:27.924552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.924630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.924656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.300 [2024-10-07 13:36:27.924682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.924709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.924732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.924975] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.924998] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.925012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.925029] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.925060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.925074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.925141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.925162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.938215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.938250] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.939537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.939570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.300 [2024-10-07 13:36:27.939587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.939692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.939719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.300 [2024-10-07 13:36:27.939735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.940311] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.940340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.940598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.940624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.940640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.940658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.940682] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.940696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.940749] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.940770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.948789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.948822] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.949045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.949075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.300 [2024-10-07 13:36:27.949092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.949201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.949227] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.300 [2024-10-07 13:36:27.949244] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.949352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.949379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.949509] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.949529] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.949542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.949558] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.949578] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.949592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.953530] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.953558] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.959168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.959200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.959496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.959528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.300 [2024-10-07 13:36:27.959546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.959631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.959656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.300 [2024-10-07 13:36:27.959684] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.960006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.960036] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.960172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.960194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.960210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.960228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.960243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.960256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.960293] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.960313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.970704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.970738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.970899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.970928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.300 [2024-10-07 13:36:27.970945] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.971024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.971049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.300 [2024-10-07 13:36:27.971066] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.300 [2024-10-07 13:36:27.971092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.971120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.300 [2024-10-07 13:36:27.971142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.971157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.971171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.971188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.300 [2024-10-07 13:36:27.971203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.300 [2024-10-07 13:36:27.971216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.300 [2024-10-07 13:36:27.971240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.971257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.300 [2024-10-07 13:36:27.983357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.983390] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.300 [2024-10-07 13:36:27.983681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.300 [2024-10-07 13:36:27.983711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.300 [2024-10-07 13:36:27.983729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:27.983865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.301 [2024-10-07 13:36:27.983890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.301 [2024-10-07 13:36:27.983906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:27.984945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:27.984991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:27.986205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:27.986230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:27.986244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.301 [2024-10-07 13:36:27.986260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:27.986273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:27.986285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.301 [2024-10-07 13:36:27.986445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.301 [2024-10-07 13:36:27.986468] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.301 [2024-10-07 13:36:27.993470] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.301 [2024-10-07 13:36:27.995579] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.301 [2024-10-07 13:36:27.995746] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.301 [2024-10-07 13:36:27.995785] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.301 [2024-10-07 13:36:27.995803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:27.996808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.301 [2024-10-07 13:36:27.996839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.301 [2024-10-07 13:36:27.996857] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:27.996876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:27.997143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:27.997169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:27.997183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:27.997197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.301 [2024-10-07 13:36:27.997401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.301 [2024-10-07 13:36:27.997426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:27.997439] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:27.997453] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.301 [2024-10-07 13:36:27.997574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.301 [2024-10-07 13:36:28.003556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.301 [2024-10-07 13:36:28.003702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.301 [2024-10-07 13:36:28.003732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.301 [2024-10-07 13:36:28.003750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:28.003789] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:28.003817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:28.003832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:28.003847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.301 [2024-10-07 13:36:28.003872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.301 [2024-10-07 13:36:28.011593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.301 [2024-10-07 13:36:28.011903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.301 [2024-10-07 13:36:28.011936] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.301 [2024-10-07 13:36:28.011954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:28.012267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:28.012444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:28.012476] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:28.012492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.301 [2024-10-07 13:36:28.012599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.301 [2024-10-07 13:36:28.013640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.301 [2024-10-07 13:36:28.013787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.301 [2024-10-07 13:36:28.013815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.301 [2024-10-07 13:36:28.013832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:28.013857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:28.013881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:28.013896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:28.013910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.301 [2024-10-07 13:36:28.013934] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.301 [2024-10-07 13:36:28.024688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.301 [2024-10-07 13:36:28.024912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.301 [2024-10-07 13:36:28.025055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.301 [2024-10-07 13:36:28.025085] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.301 [2024-10-07 13:36:28.025102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:28.025377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.301 [2024-10-07 13:36:28.025407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.301 [2024-10-07 13:36:28.025424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:28.025443] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:28.025495] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:28.025517] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:28.025531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:28.025546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.301 [2024-10-07 13:36:28.025738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.301 [2024-10-07 13:36:28.025778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:28.025792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:28.025805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.301 [2024-10-07 13:36:28.025871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.301 [2024-10-07 13:36:28.034777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.301 [2024-10-07 13:36:28.035631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.301 [2024-10-07 13:36:28.035663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.301 [2024-10-07 13:36:28.035691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:28.040612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:28.040798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.301 [2024-10-07 13:36:28.040835] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:28.040852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:28.040865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.301 [2024-10-07 13:36:28.040890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.301 [2024-10-07 13:36:28.041006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.301 [2024-10-07 13:36:28.041033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.301 [2024-10-07 13:36:28.041050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:28.041076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:28.041100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:28.041116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:28.041129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.301 [2024-10-07 13:36:28.041153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.301 [2024-10-07 13:36:28.046923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.301 [2024-10-07 13:36:28.047226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.301 [2024-10-07 13:36:28.047259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.301 [2024-10-07 13:36:28.047277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:28.047304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:28.047329] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:28.047344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:28.047357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.301 [2024-10-07 13:36:28.047381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.301 [2024-10-07 13:36:28.050882] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.301 [2024-10-07 13:36:28.051060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.301 [2024-10-07 13:36:28.051088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.301 [2024-10-07 13:36:28.051105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:28.051136] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:28.051161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:28.051176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:28.051190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.301 [2024-10-07 13:36:28.051213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.301 [2024-10-07 13:36:28.057982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.301 [2024-10-07 13:36:28.058124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.301 [2024-10-07 13:36:28.058153] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.301 [2024-10-07 13:36:28.058171] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:28.058626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:28.058893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:28.058919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:28.058935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.301 [2024-10-07 13:36:28.058986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.301 [2024-10-07 13:36:28.060984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.301 [2024-10-07 13:36:28.061197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.301 [2024-10-07 13:36:28.061224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.301 [2024-10-07 13:36:28.061241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.301 [2024-10-07 13:36:28.061268] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.301 [2024-10-07 13:36:28.061292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.301 [2024-10-07 13:36:28.061307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.301 [2024-10-07 13:36:28.061322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.061346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.069170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.069427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.069459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.302 [2024-10-07 13:36:28.069478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.069597] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.069716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.069738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.069758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.069880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.074646] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.074826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.074856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.302 [2024-10-07 13:36:28.074873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.075072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.075141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.075176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.075191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.075217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.079258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.079498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.079529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.302 [2024-10-07 13:36:28.079546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.079572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.079596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.079611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.079624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.079648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.088599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.088762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.088791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.302 [2024-10-07 13:36:28.088808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.088834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.088858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.088873] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.088887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.088912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.089338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.089475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.089502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.302 [2024-10-07 13:36:28.089519] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.089544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.089568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.089583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.089596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.089620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.102287] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.102321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.102462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.102492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.302 [2024-10-07 13:36:28.102508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.102618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.102644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.302 [2024-10-07 13:36:28.102660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.102697] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.102720] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.102741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.102756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.102769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.102785] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.102800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.102814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.102838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.102854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.115393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.115427] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.115621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.115649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.302 [2024-10-07 13:36:28.115678] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.115803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.115829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.302 [2024-10-07 13:36:28.115845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.115953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.115979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.116101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.116137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.116150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.116167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.116181] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.116193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.117317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.117343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.125506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.125554] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.125716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.125745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.302 [2024-10-07 13:36:28.125761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.125852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.125877] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.302 [2024-10-07 13:36:28.125893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.125912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.125938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.125956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.125970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.125983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.126008] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.126025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.126038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.126052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.126080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.135606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.135781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.135811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.302 [2024-10-07 13:36:28.135828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.136101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.136179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.136213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.136245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.136259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.136443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.136570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.136598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.302 [2024-10-07 13:36:28.136615] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.136676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.137142] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.137166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.137180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.137412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.149282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.149317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.149982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.150015] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.302 [2024-10-07 13:36:28.150033] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.150147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.150173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.302 [2024-10-07 13:36:28.150189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.150568] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.150599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.302 [2024-10-07 13:36:28.150692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.150720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.150735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.150752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.302 [2024-10-07 13:36:28.150768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.302 [2024-10-07 13:36:28.150781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.302 [2024-10-07 13:36:28.150979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.151002] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.302 [2024-10-07 13:36:28.162205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.162239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.302 [2024-10-07 13:36:28.162462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.302 [2024-10-07 13:36:28.162492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.302 [2024-10-07 13:36:28.162509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.302 [2024-10-07 13:36:28.162643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.303 [2024-10-07 13:36:28.162678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.303 [2024-10-07 13:36:28.162697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.303 [2024-10-07 13:36:28.162807] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.303 [2024-10-07 13:36:28.162834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.303 [2024-10-07 13:36:28.165018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.303 [2024-10-07 13:36:28.165045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.303 [2024-10-07 13:36:28.165059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.303 [2024-10-07 13:36:28.165077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.303 [2024-10-07 13:36:28.165091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.303 [2024-10-07 13:36:28.165104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.303 [2024-10-07 13:36:28.165989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.303 [2024-10-07 13:36:28.166015] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.303 [2024-10-07 13:36:28.172319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.303 [2024-10-07 13:36:28.172365] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.303 [2024-10-07 13:36:28.172527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.303 [2024-10-07 13:36:28.172555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.303 [2024-10-07 13:36:28.172572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.303 [2024-10-07 13:36:28.172706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.303 [2024-10-07 13:36:28.172733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.303 [2024-10-07 13:36:28.172749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.303 [2024-10-07 13:36:28.172768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.303 [2024-10-07 13:36:28.172795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.303 [2024-10-07 13:36:28.172814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.303 [2024-10-07 13:36:28.172827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.303 [2024-10-07 13:36:28.172840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.303 [2024-10-07 13:36:28.172865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.303 [2024-10-07 13:36:28.172882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.303 [2024-10-07 13:36:28.172896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.303 [2024-10-07 13:36:28.172910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.303 [2024-10-07 13:36:28.172933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.303 [2024-10-07 13:36:28.182403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.303 [2024-10-07 13:36:28.182528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.303 [2024-10-07 13:36:28.182557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.303 [2024-10-07 13:36:28.182574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.303 [2024-10-07 13:36:28.182786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.303 [2024-10-07 13:36:28.182877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.303 [2024-10-07 13:36:28.182911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.303 [2024-10-07 13:36:28.182928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.303 [2024-10-07 13:36:28.182942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.303 [2024-10-07 13:36:28.182967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.303 [2024-10-07 13:36:28.183085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.303 [2024-10-07 13:36:28.183112] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.303 [2024-10-07 13:36:28.183130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.303 [2024-10-07 13:36:28.183587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.303 [2024-10-07 13:36:28.183839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.303 [2024-10-07 13:36:28.183865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.303 [2024-10-07 13:36:28.183880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.303 [2024-10-07 13:36:28.183932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.303 [2024-10-07 13:36:28.195502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.303 [2024-10-07 13:36:28.195536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.303 [2024-10-07 13:36:28.196111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.303 [2024-10-07 13:36:28.196157] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.303 [2024-10-07 13:36:28.196176] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.303 [2024-10-07 13:36:28.196290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.303 [2024-10-07 13:36:28.196316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.303 [2024-10-07 13:36:28.196332] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.303 [2024-10-07 13:36:28.196553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.303 [2024-10-07 13:36:28.196581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.303 [2024-10-07 13:36:28.196802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.303 [2024-10-07 13:36:28.196826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.303 [2024-10-07 13:36:28.196840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.303 [2024-10-07 13:36:28.196858] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.303 [2024-10-07 13:36:28.196872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.303 [2024-10-07 13:36:28.196886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.303 [2024-10-07 13:36:28.197158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.303 [2024-10-07 13:36:28.197182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.303 [2024-10-07 13:36:28.211547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.303 [2024-10-07 13:36:28.211581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.303 [2024-10-07 13:36:28.212216] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.303 [2024-10-07 13:36:28.212248] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.303 [2024-10-07 13:36:28.212265] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.303 [2024-10-07 13:36:28.212342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.303 [2024-10-07 13:36:28.212367] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.303 [2024-10-07 13:36:28.212383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.303 [2024-10-07 13:36:28.212808] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.303 [2024-10-07 13:36:28.212837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.303 [2024-10-07 13:36:28.213068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.303 [2024-10-07 13:36:28.213094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.303 [2024-10-07 13:36:28.213114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.303 [2024-10-07 13:36:28.213133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.303 [2024-10-07 13:36:28.213148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.303 [2024-10-07 13:36:28.213161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.303 [2024-10-07 13:36:28.213226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.303 [2024-10-07 13:36:28.213261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.303 [2024-10-07 13:36:28.225454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.303 [2024-10-07 13:36:28.225488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.303 [2024-10-07 13:36:28.226460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.303 [2024-10-07 13:36:28.226492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.303 [2024-10-07 13:36:28.226509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.303 [2024-10-07 13:36:28.226625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.303 [2024-10-07 13:36:28.226651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.303 [2024-10-07 13:36:28.226674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.303 [2024-10-07 13:36:28.227092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.303 [2024-10-07 13:36:28.227137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.303 [2024-10-07 13:36:28.227362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.303 [2024-10-07 13:36:28.227388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.303 [2024-10-07 13:36:28.227403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.303 [2024-10-07 13:36:28.227421] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.303 [2024-10-07 13:36:28.227436] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.303 [2024-10-07 13:36:28.227466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.303 [2024-10-07 13:36:28.227547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.303 [2024-10-07 13:36:28.227568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.303 [2024-10-07 13:36:28.236564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.303 [2024-10-07 13:36:28.236597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.303 [2024-10-07 13:36:28.236830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.303 [2024-10-07 13:36:28.236860] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.303 [2024-10-07 13:36:28.236877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.303 [2024-10-07 13:36:28.236987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.303 [2024-10-07 13:36:28.237012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.303 [2024-10-07 13:36:28.237034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.303 [2024-10-07 13:36:28.237144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.303 [2024-10-07 13:36:28.237171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.303 [2024-10-07 13:36:28.237301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.303 [2024-10-07 13:36:28.237321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.303 [2024-10-07 13:36:28.237334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.303 [2024-10-07 13:36:28.237351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.303 [2024-10-07 13:36:28.237365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.303 [2024-10-07 13:36:28.237376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.303 [2024-10-07 13:36:28.237476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.303 [2024-10-07 13:36:28.237496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.303 [2024-10-07 13:36:28.246698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.303 [2024-10-07 13:36:28.246746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.303 [2024-10-07 13:36:28.246867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.303 [2024-10-07 13:36:28.246895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.303 [2024-10-07 13:36:28.246912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.303 [2024-10-07 13:36:28.247018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.303 [2024-10-07 13:36:28.247043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.303 [2024-10-07 13:36:28.247059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.303 [2024-10-07 13:36:28.247077] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.303 [2024-10-07 13:36:28.247102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.303 [2024-10-07 13:36:28.247120] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.303 [2024-10-07 13:36:28.247133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.303 [2024-10-07 13:36:28.247146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.303 [2024-10-07 13:36:28.247171] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.303 [2024-10-07 13:36:28.247188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.303 [2024-10-07 13:36:28.247200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.303 [2024-10-07 13:36:28.247213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.303 [2024-10-07 13:36:28.247234] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.303 [2024-10-07 13:36:28.257087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.303 [2024-10-07 13:36:28.257126] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.303 [2024-10-07 13:36:28.257261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.303 [2024-10-07 13:36:28.257289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.303 [2024-10-07 13:36:28.257306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.257416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.257442] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.304 [2024-10-07 13:36:28.257458] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.257484] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.304 [2024-10-07 13:36:28.257506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.304 [2024-10-07 13:36:28.257527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.304 [2024-10-07 13:36:28.257543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.304 [2024-10-07 13:36:28.257557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.304 [2024-10-07 13:36:28.257574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.304 [2024-10-07 13:36:28.257589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.304 [2024-10-07 13:36:28.257602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.304 [2024-10-07 13:36:28.257642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.304 [2024-10-07 13:36:28.257659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.304 [2024-10-07 13:36:28.267708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.304 [2024-10-07 13:36:28.267742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.304 [2024-10-07 13:36:28.270435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.270468] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.304 [2024-10-07 13:36:28.270486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.270624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.270649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.304 [2024-10-07 13:36:28.270674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.272217] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.304 [2024-10-07 13:36:28.272249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.304 [2024-10-07 13:36:28.272305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.304 [2024-10-07 13:36:28.272325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.304 [2024-10-07 13:36:28.272339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.304 [2024-10-07 13:36:28.272362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.304 [2024-10-07 13:36:28.272378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.304 [2024-10-07 13:36:28.272390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.304 [2024-10-07 13:36:28.272415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.304 [2024-10-07 13:36:28.272431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.304 [2024-10-07 13:36:28.277823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.304 [2024-10-07 13:36:28.277871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.304 [2024-10-07 13:36:28.278055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.278083] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.304 [2024-10-07 13:36:28.278100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.278212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.278239] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.304 [2024-10-07 13:36:28.278255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.278274] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.304 [2024-10-07 13:36:28.278300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.304 [2024-10-07 13:36:28.278319] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.304 [2024-10-07 13:36:28.278332] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.304 [2024-10-07 13:36:28.278345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.304 [2024-10-07 13:36:28.278371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.304 [2024-10-07 13:36:28.278388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.304 [2024-10-07 13:36:28.278401] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.304 [2024-10-07 13:36:28.278414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.304 [2024-10-07 13:36:28.278453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.304 [2024-10-07 13:36:28.287911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.304 [2024-10-07 13:36:28.288260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.288292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.304 [2024-10-07 13:36:28.288310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.288376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.304 [2024-10-07 13:36:28.288569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.304 [2024-10-07 13:36:28.288605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.304 [2024-10-07 13:36:28.288647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.304 [2024-10-07 13:36:28.288662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.304 [2024-10-07 13:36:28.288739] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.304 [2024-10-07 13:36:28.288856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.288883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.304 [2024-10-07 13:36:28.288899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.289083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.304 [2024-10-07 13:36:28.289170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.304 [2024-10-07 13:36:28.289191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.304 [2024-10-07 13:36:28.289205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.304 [2024-10-07 13:36:28.289230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.304 [2024-10-07 13:36:28.302275] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.304 [2024-10-07 13:36:28.302309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.304 [2024-10-07 13:36:28.302425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.302454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.304 [2024-10-07 13:36:28.302471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.302560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.302586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.304 [2024-10-07 13:36:28.302602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.302627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.304 [2024-10-07 13:36:28.302648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.304 [2024-10-07 13:36:28.302682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.304 [2024-10-07 13:36:28.302712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.304 [2024-10-07 13:36:28.302726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.304 [2024-10-07 13:36:28.302744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.304 [2024-10-07 13:36:28.302759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.304 [2024-10-07 13:36:28.302772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.304 [2024-10-07 13:36:28.302795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.304 [2024-10-07 13:36:28.302812] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.304 [2024-10-07 13:36:28.316340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.304 [2024-10-07 13:36:28.316374] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.304 [2024-10-07 13:36:28.316503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.316533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.304 [2024-10-07 13:36:28.316551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.316637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.316664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.304 [2024-10-07 13:36:28.316691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.316718] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.304 [2024-10-07 13:36:28.316739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.304 [2024-10-07 13:36:28.316760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.304 [2024-10-07 13:36:28.316775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.304 [2024-10-07 13:36:28.316789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.304 [2024-10-07 13:36:28.316805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.304 [2024-10-07 13:36:28.316819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.304 [2024-10-07 13:36:28.316834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.304 [2024-10-07 13:36:28.316859] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.304 [2024-10-07 13:36:28.316890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.304 [2024-10-07 13:36:28.330868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.304 [2024-10-07 13:36:28.330902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.304 [2024-10-07 13:36:28.331488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.331520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.304 [2024-10-07 13:36:28.331538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.331650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.331688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.304 [2024-10-07 13:36:28.331706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.331927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.304 [2024-10-07 13:36:28.331968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.304 [2024-10-07 13:36:28.332017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.304 [2024-10-07 13:36:28.332043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.304 [2024-10-07 13:36:28.332058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.304 [2024-10-07 13:36:28.332075] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.304 [2024-10-07 13:36:28.332095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.304 [2024-10-07 13:36:28.332111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.304 [2024-10-07 13:36:28.332137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.304 [2024-10-07 13:36:28.332154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.304 [2024-10-07 13:36:28.344214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.304 [2024-10-07 13:36:28.344249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.304 [2024-10-07 13:36:28.344507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.344536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.304 [2024-10-07 13:36:28.344554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.344677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.304 [2024-10-07 13:36:28.344714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.304 [2024-10-07 13:36:28.344730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.304 [2024-10-07 13:36:28.346888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.346921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.347910] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.347935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.347949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.305 [2024-10-07 13:36:28.347967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.347995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.348008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.305 [2024-10-07 13:36:28.348470] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.305 [2024-10-07 13:36:28.348494] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.305 [2024-10-07 13:36:28.354330] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.354376] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.354493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.354520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.305 [2024-10-07 13:36:28.354536] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.354689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.354716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.305 [2024-10-07 13:36:28.354732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.354757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.354785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.354803] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.354815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.354829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.305 [2024-10-07 13:36:28.354854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.305 [2024-10-07 13:36:28.354871] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.354884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.354896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.305 [2024-10-07 13:36:28.354919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.305 [2024-10-07 13:36:28.364537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.364585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.364707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.364736] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.305 [2024-10-07 13:36:28.364753] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.364864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.364890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.305 [2024-10-07 13:36:28.364906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.365106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.365149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.365226] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.365248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.365262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.305 [2024-10-07 13:36:28.365296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.365311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.365323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.305 [2024-10-07 13:36:28.365348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.305 [2024-10-07 13:36:28.365365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.305 [2024-10-07 13:36:28.377902] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.377936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.378367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.378400] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.305 [2024-10-07 13:36:28.378418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.378548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.378574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.305 [2024-10-07 13:36:28.378590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.378806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.378835] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.379025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.379048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.379062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.305 [2024-10-07 13:36:28.379081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.379096] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.379109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.305 [2024-10-07 13:36:28.379153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.305 [2024-10-07 13:36:28.379174] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.305 [2024-10-07 13:36:28.391801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.391835] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.392175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.392206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.305 [2024-10-07 13:36:28.392224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.392315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.392341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.305 [2024-10-07 13:36:28.392358] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.392638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.392680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.392891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.392916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.392931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.305 [2024-10-07 13:36:28.392950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.392970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.392984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.305 [2024-10-07 13:36:28.393187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.305 [2024-10-07 13:36:28.393210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.305 [2024-10-07 13:36:28.406573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.406621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.406743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.406771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.305 [2024-10-07 13:36:28.406789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.406886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.406912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.305 [2024-10-07 13:36:28.406928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.406955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.406976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.406997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.407013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.407026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.305 [2024-10-07 13:36:28.407043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.407057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.407070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.305 [2024-10-07 13:36:28.407094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.305 [2024-10-07 13:36:28.407111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.305 [2024-10-07 13:36:28.422693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.422726] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.422939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.422969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.305 [2024-10-07 13:36:28.422986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.423128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.423154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.305 [2024-10-07 13:36:28.423170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.423632] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.423690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.424009] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.424035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.424065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.305 [2024-10-07 13:36:28.424084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.424098] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.424110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.305 [2024-10-07 13:36:28.424347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.305 [2024-10-07 13:36:28.424371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.305 [2024-10-07 13:36:28.437106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.437139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.437306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.437337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.305 [2024-10-07 13:36:28.437355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.437463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.437491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.305 [2024-10-07 13:36:28.437507] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.437532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.437553] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.437574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.437589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.437602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.305 [2024-10-07 13:36:28.437619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.437633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.437646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.305 [2024-10-07 13:36:28.437684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.305 [2024-10-07 13:36:28.437704] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.305 [2024-10-07 13:36:28.448343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.448377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.305 [2024-10-07 13:36:28.448622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.448654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.305 [2024-10-07 13:36:28.448687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.448780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.305 [2024-10-07 13:36:28.448809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.305 [2024-10-07 13:36:28.448826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.305 [2024-10-07 13:36:28.448940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.448968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.305 [2024-10-07 13:36:28.449104] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.449128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.449143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.305 [2024-10-07 13:36:28.449161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.305 [2024-10-07 13:36:28.449191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.305 [2024-10-07 13:36:28.449204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.449354] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.449378] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.306 [2024-10-07 13:36:28.458456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.458502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.458684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.458714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.306 [2024-10-07 13:36:28.458732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.458824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.458851] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.306 [2024-10-07 13:36:28.458868] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.458887] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.458913] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.458931] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.458945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.458958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.306 [2024-10-07 13:36:28.458998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.459016] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.459034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.459047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.459071] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.469697] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.469732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.470049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.470080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.306 [2024-10-07 13:36:28.470098] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.470233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.470260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.306 [2024-10-07 13:36:28.470276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.470326] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.470351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.470373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.470389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.470402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.470419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.470434] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.470446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.470472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.470488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.306 [2024-10-07 13:36:28.480065] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.480097] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.482877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.482910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.306 [2024-10-07 13:36:28.482929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.483056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.483082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.306 [2024-10-07 13:36:28.483097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.484126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.484156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.484807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.484833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.484848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.306 [2024-10-07 13:36:28.484865] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.484880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.484892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.485124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.485148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.490176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.490221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.490364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.490391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.306 [2024-10-07 13:36:28.490408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.490701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.490730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.306 [2024-10-07 13:36:28.490747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.490766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.490923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.490950] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.490964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.490978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.491102] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.491126] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.491140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.491154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.491259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.306 [2024-10-07 13:36:28.500361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.500412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.500542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.500571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.306 [2024-10-07 13:36:28.500594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.500923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.500953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.306 [2024-10-07 13:36:28.500971] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.500990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.501042] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.501065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.501078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.501092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.306 [2024-10-07 13:36:28.501275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.501315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.501330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.501343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.501405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.514064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.514098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.514240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.514270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.306 [2024-10-07 13:36:28.514288] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.514369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.514396] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.306 [2024-10-07 13:36:28.514413] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.514439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.514461] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.514482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.514497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.514511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.514528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.514542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.514556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.514585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.514603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.306 [2024-10-07 13:36:28.524178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.524225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.524468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.524497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.306 [2024-10-07 13:36:28.524515] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.524628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.524655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.306 [2024-10-07 13:36:28.524680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.524701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.527337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.527366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.527381] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.527395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.306 [2024-10-07 13:36:28.530144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.530173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.530187] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.530201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.531103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.534502] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.534534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.534701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.534732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.306 [2024-10-07 13:36:28.534750] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.534859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.534885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.306 [2024-10-07 13:36:28.534902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.534927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.534949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.534976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.534992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.535006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.535023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.535038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.535051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.535075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.535092] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.306 [2024-10-07 13:36:28.547052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.547085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.547418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.547450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.306 [2024-10-07 13:36:28.547467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.547546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.547572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.306 [2024-10-07 13:36:28.547589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.548101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.548131] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.548361] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.548386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.548401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.306 [2024-10-07 13:36:28.548418] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.548433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.548445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.548648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.548683] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.557372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.557405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.557661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.557699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.306 [2024-10-07 13:36:28.557728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.557815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.557843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.306 [2024-10-07 13:36:28.557859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.306 [2024-10-07 13:36:28.560953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.560986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.306 [2024-10-07 13:36:28.562026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.562052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.562066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.562083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.306 [2024-10-07 13:36:28.562097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.306 [2024-10-07 13:36:28.562110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.306 [2024-10-07 13:36:28.562742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.306 [2024-10-07 13:36:28.562768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.306 [2024-10-07 13:36:28.567484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.567530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.306 [2024-10-07 13:36:28.567674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.306 [2024-10-07 13:36:28.567705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.307 [2024-10-07 13:36:28.567722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.567835] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.567863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.307 [2024-10-07 13:36:28.567879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.567897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.567923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.567942] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.567955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.567968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.307 [2024-10-07 13:36:28.567993] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.307 [2024-10-07 13:36:28.568010] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.568024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.568037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.570393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.307 [2024-10-07 13:36:28.577583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.577790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.577822] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.307 [2024-10-07 13:36:28.577840] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.577867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.577898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.577999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.578025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.307 [2024-10-07 13:36:28.578042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.578057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.578069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.578082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.578375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.307 [2024-10-07 13:36:28.578404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.578473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.578494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.578509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.578534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.307 [2024-10-07 13:36:28.587909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.590290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.590323] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.307 [2024-10-07 13:36:28.590341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.591375] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.591853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.591890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.591907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.591930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.592172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.307 [2024-10-07 13:36:28.592265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.592295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.307 [2024-10-07 13:36:28.592321] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.592646] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.592896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.592929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.592943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.592995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.307 [2024-10-07 13:36:28.598140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.598498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.598529] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.307 [2024-10-07 13:36:28.598546] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.598572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.598596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.598611] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.598624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.598648] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.307 [2024-10-07 13:36:28.602616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.602818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.602849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.307 [2024-10-07 13:36:28.602867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.602975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.603103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.603125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.603139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.606090] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.307 [2024-10-07 13:36:28.608448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.608588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.608618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.307 [2024-10-07 13:36:28.608636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.608661] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.608708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.608729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.608744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.608942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.307 [2024-10-07 13:36:28.612723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.612869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.612899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.307 [2024-10-07 13:36:28.612915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.612940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.612963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.612978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.612992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.613016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.307 8406.00 IOPS, 32.84 MiB/s [2024-10-07T11:36:38.019Z] [2024-10-07 13:36:28.623785] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.624017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.624167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.624198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.307 [2024-10-07 13:36:28.624216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.624511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.624542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.307 [2024-10-07 13:36:28.624559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.624578] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.624795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.624823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.624837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.624851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in 
failed state. 00:25:56.307 [2024-10-07 13:36:28.624916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.307 [2024-10-07 13:36:28.624949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.624979] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.624993] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.625017] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.307 [2024-10-07 13:36:28.634206] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.634238] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.634430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.634458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.307 [2024-10-07 13:36:28.634476] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.634592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.634618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.307 [2024-10-07 13:36:28.634634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.634659] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.634693] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.634716] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.634735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.634748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.634765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.634780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.634792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.637342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.307 [2024-10-07 13:36:28.637369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.307 [2024-10-07 13:36:28.644320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.644370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.644526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.644553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.307 [2024-10-07 13:36:28.644570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.644685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.644711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.307 [2024-10-07 13:36:28.644728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.644746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.644771] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.644790] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.644803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.644821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.307 [2024-10-07 13:36:28.644847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.307 [2024-10-07 13:36:28.644864] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.644877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.644890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.644913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.307 [2024-10-07 13:36:28.656232] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.656267] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.656377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.656407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.307 [2024-10-07 13:36:28.656424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.656557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.656583] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.307 [2024-10-07 13:36:28.656599] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.656624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.656645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.656676] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.656693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.656719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.656736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.656750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.656762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.307 [2024-10-07 13:36:28.656797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.307 [2024-10-07 13:36:28.656814] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.307 [2024-10-07 13:36:28.666348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.666396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.307 [2024-10-07 13:36:28.666529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.666556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.307 [2024-10-07 13:36:28.666572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.666731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.307 [2024-10-07 13:36:28.666758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.307 [2024-10-07 13:36:28.666780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.307 [2024-10-07 13:36:28.666799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.666826] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.307 [2024-10-07 13:36:28.666844] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.307 [2024-10-07 13:36:28.666857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.307 [2024-10-07 13:36:28.666871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.307 [2024-10-07 13:36:28.669484] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.307 [2024-10-07 13:36:28.669512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.669527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.669540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.672331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.308 [2024-10-07 13:36:28.676722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.676755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.676918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.676947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.308 [2024-10-07 13:36:28.676964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.677072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.677098] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.308 [2024-10-07 13:36:28.677113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.677139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.677161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.677182] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.677198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.677211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.677228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.677243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.677257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.677282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.308 [2024-10-07 13:36:28.677314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.308 [2024-10-07 13:36:28.689245] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.689284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.689663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.689703] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.308 [2024-10-07 13:36:28.689721] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.689855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.689881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.308 [2024-10-07 13:36:28.689897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.690384] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.690414] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.690644] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.690678] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.690696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.308 [2024-10-07 13:36:28.690715] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.690731] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.690744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.690947] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.308 [2024-10-07 13:36:28.690970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.308 [2024-10-07 13:36:28.702270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.702302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.702645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.702684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.308 [2024-10-07 13:36:28.702704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.702813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.702839] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.308 [2024-10-07 13:36:28.702855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.702906] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.702931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.702953] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.702968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.702981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.703004] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.703020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.703034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.703183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.308 [2024-10-07 13:36:28.703207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.308 [2024-10-07 13:36:28.714657] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.714698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.714916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.714944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.308 [2024-10-07 13:36:28.714961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.715072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.715099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.308 [2024-10-07 13:36:28.715115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.715224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.715251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.715369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.715389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.715417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.308 [2024-10-07 13:36:28.715434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.715448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.715461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.715561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.308 [2024-10-07 13:36:28.715581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.308 [2024-10-07 13:36:28.724781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.724829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.724969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.724998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.308 [2024-10-07 13:36:28.725015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.725123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.725148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.308 [2024-10-07 13:36:28.725170] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.725190] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.725216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.725235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.725249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.725262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.725288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.308 [2024-10-07 13:36:28.725305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.725319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.725332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.725355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.308 [2024-10-07 13:36:28.734869] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.735022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.735052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.308 [2024-10-07 13:36:28.735068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.735282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.735358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.735405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.735423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.735437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.735462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.308 [2024-10-07 13:36:28.735578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.735604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.308 [2024-10-07 13:36:28.735621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.735815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.735872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.735893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.735907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.735931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.308 [2024-10-07 13:36:28.748349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.748383] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.748781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.748813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.308 [2024-10-07 13:36:28.748831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.748914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.748939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.308 [2024-10-07 13:36:28.748955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.749244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.749290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.749524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.749550] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.749565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.308 [2024-10-07 13:36:28.749583] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.749598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.749612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.749856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.308 [2024-10-07 13:36:28.749880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.308 [2024-10-07 13:36:28.759039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.759072] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.759331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.759362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.308 [2024-10-07 13:36:28.759380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.759486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.759512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.308 [2024-10-07 13:36:28.759530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.759638] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.759673] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.759793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.759815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.759829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.759847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.759867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.759881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.760080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.308 [2024-10-07 13:36:28.760102] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.308 [2024-10-07 13:36:28.770698] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.770733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.770957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.770988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.308 [2024-10-07 13:36:28.771006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.771118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.771144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.308 [2024-10-07 13:36:28.771161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.771269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.771297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.771331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.771365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.771378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.308 [2024-10-07 13:36:28.771395] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.771424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.771436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.771462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.308 [2024-10-07 13:36:28.771477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.308 [2024-10-07 13:36:28.781518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.781552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.781695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.781725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.308 [2024-10-07 13:36:28.781742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.781823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.781849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.308 [2024-10-07 13:36:28.781865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.308 [2024-10-07 13:36:28.781896] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.781919] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.308 [2024-10-07 13:36:28.781940] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.781956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.781969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.781986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.308 [2024-10-07 13:36:28.782000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.308 [2024-10-07 13:36:28.782013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.308 [2024-10-07 13:36:28.782037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.308 [2024-10-07 13:36:28.782054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.308 [2024-10-07 13:36:28.792264] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.792298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.308 [2024-10-07 13:36:28.792435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.308 [2024-10-07 13:36:28.792464] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.308 [2024-10-07 13:36:28.792481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.792593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.792618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.309 [2024-10-07 13:36:28.792634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.792833] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.792862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.792911] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.792931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.792945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.309 [2024-10-07 13:36:28.792963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.792978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.792991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.793172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.793195] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.805715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.805749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.805977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.806011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.309 [2024-10-07 13:36:28.806030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.806118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.806144] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.309 [2024-10-07 13:36:28.806161] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.806332] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.806360] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.806424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.806444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.806457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.806490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.806505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.806519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.806543] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.806560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.309 [2024-10-07 13:36:28.821999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.822032] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.822274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.822303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.309 [2024-10-07 13:36:28.822319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.822401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.822427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.309 [2024-10-07 13:36:28.822443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.822706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.822735] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.822874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.822897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.822911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.309 [2024-10-07 13:36:28.822930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.822945] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.822964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.823013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.823035] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.835106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.835141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.835283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.835311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.309 [2024-10-07 13:36:28.835327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.835433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.835458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.309 [2024-10-07 13:36:28.835473] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.836747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.836779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.836817] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.836836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.836850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.836868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.836883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.836895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.836919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.836935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.309 [2024-10-07 13:36:28.850743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.850778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.850882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.850910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.309 [2024-10-07 13:36:28.850928] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.851059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.851084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.309 [2024-10-07 13:36:28.851100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.851126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.851153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.851175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.851190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.851203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.309 [2024-10-07 13:36:28.851221] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.851236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.851248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.851272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.851289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.863478] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.863512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.865536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.865569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.309 [2024-10-07 13:36:28.865587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.865707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.865733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.309 [2024-10-07 13:36:28.865749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.866462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.866492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.866904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.866932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.866947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.866964] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.866992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.867004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.867233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.867259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.309 [2024-10-07 13:36:28.873592] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.873637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.873811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.873840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.309 [2024-10-07 13:36:28.873864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.876615] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.876647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.309 [2024-10-07 13:36:28.876664] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.876694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.877494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.877521] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.877535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.877547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.309 [2024-10-07 13:36:28.877743] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.877767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.877781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.877795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.877902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.883798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.883831] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.883947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.883975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.309 [2024-10-07 13:36:28.883992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.884079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.884105] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.309 [2024-10-07 13:36:28.884121] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.884146] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.884167] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.884188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.884203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.884216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.884233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.884247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.884269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.884295] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.884312] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.309 [2024-10-07 13:36:28.895796] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.895828] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.895935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.895962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.309 [2024-10-07 13:36:28.895978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.896064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.896091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.309 [2024-10-07 13:36:28.896108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.896133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.896155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.896176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.896191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.896204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.309 [2024-10-07 13:36:28.896222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.896235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.896248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.896272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.896288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.907842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.907876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.908111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.908141] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.309 [2024-10-07 13:36:28.908158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.908364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.309 [2024-10-07 13:36:28.908392] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.309 [2024-10-07 13:36:28.908408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.309 [2024-10-07 13:36:28.908532] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.908559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.309 [2024-10-07 13:36:28.908706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.908730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.908744] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.908761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.309 [2024-10-07 13:36:28.908776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.309 [2024-10-07 13:36:28.908789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.309 [2024-10-07 13:36:28.908896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.309 [2024-10-07 13:36:28.908917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.309 [2024-10-07 13:36:28.917957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.309 [2024-10-07 13:36:28.918007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.918165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.918195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.310 [2024-10-07 13:36:28.918212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.919305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.919335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.310 [2024-10-07 13:36:28.919352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.919371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.919577] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.919605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.919620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.919633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.310 [2024-10-07 13:36:28.919765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:28.919791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.919806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.919820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:28.919931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:28.928225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.928395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.928426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.310 [2024-10-07 13:36:28.928443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.928475] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.928507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.928628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.928655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.310 [2024-10-07 13:36:28.928682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.928699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.928713] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.928726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:28.928911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:28.928940] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.929005] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.929041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.929055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:28.929096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.310 [2024-10-07 13:36:28.942172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.942205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.942598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.942630] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.310 [2024-10-07 13:36:28.942648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.942750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.942776] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.310 [2024-10-07 13:36:28.942792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.943264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.943293] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.943537] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.943562] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.943577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.310 [2024-10-07 13:36:28.943594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.943609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.943621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:28.943839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:28.943865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:28.953058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.953091] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.953245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.953276] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.310 [2024-10-07 13:36:28.953294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.953414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.953439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.310 [2024-10-07 13:36:28.953455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.954296] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.954327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.956909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.956934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.956948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:28.956979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.956994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.957006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:28.957302] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:28.957328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.310 [2024-10-07 13:36:28.963476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.963508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.963755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.963786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.310 [2024-10-07 13:36:28.963804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.963882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.963907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.310 [2024-10-07 13:36:28.963923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.964041] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.964070] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.964172] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.964199] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.964214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.310 [2024-10-07 13:36:28.964232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.964247] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.964259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:28.966576] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:28.966603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:28.973587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.973633] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.973820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.973850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.310 [2024-10-07 13:36:28.973867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.973985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.974013] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.310 [2024-10-07 13:36:28.974029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.974048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.974074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.974092] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.974105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.974118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:28.974143] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:28.974160] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.974173] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.974186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:28.974208] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.310 [2024-10-07 13:36:28.985583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.985617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.985998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.986030] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.310 [2024-10-07 13:36:28.986047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.986162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.986187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.310 [2024-10-07 13:36:28.986203] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.986254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.986279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.986300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.986316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.986329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.310 [2024-10-07 13:36:28.986346] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.986360] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.986373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:28.986397] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:28.986413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:28.995707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.995755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:28.995933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.995962] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.310 [2024-10-07 13:36:28.995980] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.996072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:28.996099] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.310 [2024-10-07 13:36:28.996116] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:28.996134] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.996161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:28.996179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.996192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.996205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:28.996230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:28.996247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:28.996260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:28.996273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:28.996301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.310 [2024-10-07 13:36:29.008282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:29.008317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:29.008593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:29.008624] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.310 [2024-10-07 13:36:29.008641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:29.008759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:29.008786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.310 [2024-10-07 13:36:29.008803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:29.008976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:29.009006] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:29.009067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:29.009104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:29.009118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.310 [2024-10-07 13:36:29.009135] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:29.009150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:29.009163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:29.009188] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:29.009204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:29.019111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:29.019144] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:29.019332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:29.019361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.310 [2024-10-07 13:36:29.019378] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:29.019523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:29.019550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.310 [2024-10-07 13:36:29.019566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:29.019592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:29.019614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.310 [2024-10-07 13:36:29.019635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:29.019650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:29.019679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:29.019699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.310 [2024-10-07 13:36:29.019715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.310 [2024-10-07 13:36:29.019728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.310 [2024-10-07 13:36:29.019753] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.310 [2024-10-07 13:36:29.019770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.310 [2024-10-07 13:36:29.031029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:29.031063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.310 [2024-10-07 13:36:29.031359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:29.031391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.310 [2024-10-07 13:36:29.031409] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.310 [2024-10-07 13:36:29.031515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.310 [2024-10-07 13:36:29.031542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.311 [2024-10-07 13:36:29.031559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.311 [2024-10-07 13:36:29.033089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.311 [2024-10-07 13:36:29.033120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.311 [2024-10-07 13:36:29.033792] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.311 [2024-10-07 13:36:29.033817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.311 [2024-10-07 13:36:29.033831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.311 [2024-10-07 13:36:29.033847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.311 [2024-10-07 13:36:29.033862] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.311 [2024-10-07 13:36:29.033875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.311 [2024-10-07 13:36:29.034128] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.311 [2024-10-07 13:36:29.034153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.311 [2024-10-07 13:36:29.041375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.311 [2024-10-07 13:36:29.041406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.311 [2024-10-07 13:36:29.041612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.311 [2024-10-07 13:36:29.041642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.311 [2024-10-07 13:36:29.041659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.311 [2024-10-07 13:36:29.041774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.311 [2024-10-07 13:36:29.041807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.311 [2024-10-07 13:36:29.041824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.311 [2024-10-07 13:36:29.041849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.311 [2024-10-07 13:36:29.041871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.311 [2024-10-07 13:36:29.041892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.311 [2024-10-07 13:36:29.041907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.311 [2024-10-07 13:36:29.041921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.311 [2024-10-07 13:36:29.041938] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.311 [2024-10-07 13:36:29.041952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.311 [2024-10-07 13:36:29.041965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.311 [2024-10-07 13:36:29.041990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.311 [2024-10-07 13:36:29.042021] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.311 [2024-10-07 13:36:29.051485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.311 [2024-10-07 13:36:29.051531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.311 [2024-10-07 13:36:29.051646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.311 [2024-10-07 13:36:29.051702] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.311 [2024-10-07 13:36:29.051722] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.311 [2024-10-07 13:36:29.051867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.311 [2024-10-07 13:36:29.051895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.311 [2024-10-07 13:36:29.051912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.311 [2024-10-07 13:36:29.051931] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.311 [2024-10-07 13:36:29.052203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.311 [2024-10-07 13:36:29.052232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.311 [2024-10-07 13:36:29.052246] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.311 [2024-10-07 13:36:29.052259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.311 [2024-10-07 13:36:29.052389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.311 [2024-10-07 13:36:29.052414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.311 [2024-10-07 13:36:29.052428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.311 [2024-10-07 13:36:29.052441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.311 [2024-10-07 13:36:29.052546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.311 [2024-10-07 13:36:29.063991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.311 [2024-10-07 13:36:29.064026] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.311 [2024-10-07 13:36:29.064380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.311 [2024-10-07 13:36:29.064412] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.311 [2024-10-07 13:36:29.064429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.311 [2024-10-07 13:36:29.064519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.311 [2024-10-07 13:36:29.064544] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.311 [2024-10-07 13:36:29.064560] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.311 [2024-10-07 13:36:29.064776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.311 [2024-10-07 13:36:29.064806] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.311 [2024-10-07 13:36:29.064895] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.311 [2024-10-07 13:36:29.064919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.311 [2024-10-07 13:36:29.064934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.311 [2024-10-07 13:36:29.064952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.311 [2024-10-07 13:36:29.064966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.311 [2024-10-07 13:36:29.064979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.311 [2024-10-07 13:36:29.065153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.311 [2024-10-07 13:36:29.065177] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.311 [2024-10-07 13:36:29.078313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.311 [2024-10-07 13:36:29.078347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.311 [2024-10-07 13:36:29.079406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.311 [2024-10-07 13:36:29.079438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.311 [2024-10-07 13:36:29.079455] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.311 [2024-10-07 13:36:29.079544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.311 [2024-10-07 13:36:29.079570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.311 [2024-10-07 13:36:29.079586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.311 [2024-10-07 13:36:29.079708] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.311 [2024-10-07 13:36:29.079737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.311 [2024-10-07 13:36:29.079760] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.311 [2024-10-07 13:36:29.079775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.311 [2024-10-07 13:36:29.079794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.311 [2024-10-07 13:36:29.079813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.311 [2024-10-07 13:36:29.079827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.311 [2024-10-07 13:36:29.079840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.311 [2024-10-07 13:36:29.080050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.311 [2024-10-07 13:36:29.080075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.311 [2024-10-07 13:36:29.092248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.311 [2024-10-07 13:36:29.092282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.311 [2024-10-07 13:36:29.092419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.311 [2024-10-07 13:36:29.092448] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.311 [2024-10-07 13:36:29.092465] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.311 [2024-10-07 13:36:29.092575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.311 [2024-10-07 13:36:29.092601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.311 [2024-10-07 13:36:29.092617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.311 [2024-10-07 13:36:29.092643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.311 [2024-10-07 13:36:29.092664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.311 [2024-10-07 13:36:29.093017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.311 [2024-10-07 13:36:29.093041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.311 [2024-10-07 13:36:29.093054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.311 [2024-10-07 13:36:29.093072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.311 [2024-10-07 13:36:29.093101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.311 [2024-10-07 13:36:29.093113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.311 [2024-10-07 13:36:29.094058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.311 [2024-10-07 13:36:29.094082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.311 [2024-10-07 13:36:29.105765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.311 [2024-10-07 13:36:29.105799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.311 [2024-10-07 13:36:29.106045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.311 [2024-10-07 13:36:29.106075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.311 [2024-10-07 13:36:29.106093] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.311 [2024-10-07 13:36:29.106198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.311 [2024-10-07 13:36:29.106224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.311 [2024-10-07 13:36:29.106246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.311 [2024-10-07 13:36:29.107876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.311 [2024-10-07 13:36:29.107907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.311 [2024-10-07 13:36:29.108419] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.311 [2024-10-07 13:36:29.108442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.311 [2024-10-07 13:36:29.108456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.311 [2024-10-07 13:36:29.108472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.311 [2024-10-07 13:36:29.108486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.311 [2024-10-07 13:36:29.108498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.311 [2024-10-07 13:36:29.108747] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.311 [2024-10-07 13:36:29.108773] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.311 [2024-10-07 13:36:29.120154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.311 [2024-10-07 13:36:29.120186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.311 [2024-10-07 13:36:29.120480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.311 [2024-10-07 13:36:29.120510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.311 [2024-10-07 13:36:29.120527] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.311 [2024-10-07 13:36:29.120606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.311 [2024-10-07 13:36:29.120633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.311 [2024-10-07 13:36:29.120649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.311 [2024-10-07 13:36:29.121641] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.311 [2024-10-07 13:36:29.121696] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.311 [2024-10-07 13:36:29.121813] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.311 [2024-10-07 13:36:29.121835] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.311 [2024-10-07 13:36:29.121849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.311 [2024-10-07 13:36:29.121866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.311 [2024-10-07 13:36:29.121881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.311 [2024-10-07 13:36:29.121893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.311 [2024-10-07 13:36:29.121918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.311 [2024-10-07 13:36:29.121935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.311 [2024-10-07 13:36:29.130279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.311 [2024-10-07 13:36:29.130333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.311 [2024-10-07 13:36:29.130549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.311 [2024-10-07 13:36:29.130578] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.311 [2024-10-07 13:36:29.130595] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.311 [2024-10-07 13:36:29.130738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.311 [2024-10-07 13:36:29.130767] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.311 [2024-10-07 13:36:29.130783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.311 [2024-10-07 13:36:29.130802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.311 [2024-10-07 13:36:29.133434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.311 [2024-10-07 13:36:29.133462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.311 [2024-10-07 13:36:29.133477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.311 [2024-10-07 13:36:29.133490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.311 [2024-10-07 13:36:29.135211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.311 [2024-10-07 13:36:29.135240] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.311 [2024-10-07 13:36:29.135255] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.311 [2024-10-07 13:36:29.135268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.311 [2024-10-07 13:36:29.136264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.311 [2024-10-07 13:36:29.140714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.311 [2024-10-07 13:36:29.140745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.311 [2024-10-07 13:36:29.140854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.311 [2024-10-07 13:36:29.140880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.311 [2024-10-07 13:36:29.140897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.311 [2024-10-07 13:36:29.140983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.311 [2024-10-07 13:36:29.141010] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.311 [2024-10-07 13:36:29.141026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.311 [2024-10-07 13:36:29.141051] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.311 [2024-10-07 13:36:29.141072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.311 [2024-10-07 13:36:29.141093] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.311 [2024-10-07 13:36:29.141107] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.311 [2024-10-07 13:36:29.141121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.311 [2024-10-07 13:36:29.141143] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.312 [2024-10-07 13:36:29.141159] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.312 [2024-10-07 13:36:29.141172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.312 [2024-10-07 13:36:29.141196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.312 [2024-10-07 13:36:29.141212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.312 [2024-10-07 13:36:29.152812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.312 [2024-10-07 13:36:29.152846] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.312 [2024-10-07 13:36:29.153176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.312 [2024-10-07 13:36:29.153208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.312 [2024-10-07 13:36:29.153225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.312 [2024-10-07 13:36:29.153334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.312 [2024-10-07 13:36:29.153361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.312 [2024-10-07 13:36:29.153377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.312 [2024-10-07 13:36:29.153427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.312 [2024-10-07 13:36:29.153453] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.312 [2024-10-07 13:36:29.153475] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.312 [2024-10-07 13:36:29.153490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.312 [2024-10-07 13:36:29.153503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.312 [2024-10-07 13:36:29.153520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.312 [2024-10-07 13:36:29.153535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.312 [2024-10-07 13:36:29.153548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.312 [2024-10-07 13:36:29.153706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.312 [2024-10-07 13:36:29.153731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.312 [2024-10-07 13:36:29.163950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.312 [2024-10-07 13:36:29.163985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.312 [2024-10-07 13:36:29.164405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.312 [2024-10-07 13:36:29.164436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.312 [2024-10-07 13:36:29.164454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.312 [2024-10-07 13:36:29.164569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.312 [2024-10-07 13:36:29.164595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.312 [2024-10-07 13:36:29.164611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.312 [2024-10-07 13:36:29.164674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.312 [2024-10-07 13:36:29.164701] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.312 [2024-10-07 13:36:29.164746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.312 [2024-10-07 13:36:29.164766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.312 [2024-10-07 13:36:29.164780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.312 [2024-10-07 13:36:29.164798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.312 [2024-10-07 13:36:29.164813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.312 [2024-10-07 13:36:29.164826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.312 [2024-10-07 13:36:29.164850] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.312 [2024-10-07 13:36:29.164867] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.312 [2024-10-07 13:36:29.175741] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.312 [2024-10-07 13:36:29.175774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.312 [2024-10-07 13:36:29.176204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.312 [2024-10-07 13:36:29.176235] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.312 [2024-10-07 13:36:29.176252] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.312 [2024-10-07 13:36:29.176340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.312 [2024-10-07 13:36:29.176366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.312 [2024-10-07 13:36:29.176382] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.312 [2024-10-07 13:36:29.176511] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.312 [2024-10-07 13:36:29.176540] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.312 [2024-10-07 13:36:29.178441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.312 [2024-10-07 13:36:29.178467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.312 [2024-10-07 13:36:29.178481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.312 [2024-10-07 13:36:29.178499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.312 [2024-10-07 13:36:29.178514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.312 [2024-10-07 13:36:29.178527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.312 [2024-10-07 13:36:29.179357] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.312 [2024-10-07 13:36:29.179381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.312 [2024-10-07 13:36:29.186043] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.312 [2024-10-07 13:36:29.186094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.312 [2024-10-07 13:36:29.186427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.312 [2024-10-07 13:36:29.186457] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.312 [2024-10-07 13:36:29.186475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.312 [2024-10-07 13:36:29.186590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.312 [2024-10-07 13:36:29.186617] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.312 [2024-10-07 13:36:29.186633] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.312 [2024-10-07 13:36:29.186741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.312 [2024-10-07 13:36:29.186770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.312 [2024-10-07 13:36:29.186793] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.312 [2024-10-07 13:36:29.186808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.312 [2024-10-07 13:36:29.186822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.312 [2024-10-07 13:36:29.186838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.312 [2024-10-07 13:36:29.186867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.312 [2024-10-07 13:36:29.186880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.312 [2024-10-07 13:36:29.186905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.312 [2024-10-07 13:36:29.186920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.312 [2024-10-07 13:36:29.196183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.312 [2024-10-07 13:36:29.196228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.312 [2024-10-07 13:36:29.196392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.312 [2024-10-07 13:36:29.196421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.312 [2024-10-07 13:36:29.196438] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.312 [2024-10-07 13:36:29.196523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.312 [2024-10-07 13:36:29.196551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.312 [2024-10-07 13:36:29.196568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.312 [2024-10-07 13:36:29.196586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.312 [2024-10-07 13:36:29.196612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.312 [2024-10-07 13:36:29.196631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.312 [2024-10-07 13:36:29.196643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.312 [2024-10-07 13:36:29.196656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.312 [2024-10-07 13:36:29.196687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.312 [2024-10-07 13:36:29.196712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.312 [2024-10-07 13:36:29.196726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.312 [2024-10-07 13:36:29.196739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.312 [2024-10-07 13:36:29.196762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.312 [2024-10-07 13:36:29.209408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.312 [2024-10-07 13:36:29.209442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.312 [2024-10-07 13:36:29.209643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-10-07 13:36:29.209682] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.312 [2024-10-07 13:36:29.209701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.312 [2024-10-07 13:36:29.209810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-10-07 13:36:29.209838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.312 [2024-10-07 13:36:29.209854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.312 [2024-10-07 13:36:29.210112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.312 [2024-10-07 13:36:29.210157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.312 [2024-10-07 13:36:29.210675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.312 [2024-10-07 13:36:29.210700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.312 [2024-10-07 13:36:29.210714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.312 [2024-10-07 13:36:29.210730] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.312 [2024-10-07 13:36:29.210745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.312 [2024-10-07 13:36:29.210758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.312 [2024-10-07 13:36:29.210989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.312 [2024-10-07 13:36:29.211014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.312 [2024-10-07 13:36:29.225312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.312 [2024-10-07 13:36:29.225347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.312 [2024-10-07 13:36:29.226128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-10-07 13:36:29.226159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.312 [2024-10-07 13:36:29.226177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.312 [2024-10-07 13:36:29.226321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-10-07 13:36:29.226349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.312 [2024-10-07 13:36:29.226365] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.312 [2024-10-07 13:36:29.226821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.312 [2024-10-07 13:36:29.226858] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.312 [2024-10-07 13:36:29.226920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.312 [2024-10-07 13:36:29.226940] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.312 [2024-10-07 13:36:29.226953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.312 [2024-10-07 13:36:29.226971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.312 [2024-10-07 13:36:29.226985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.312 [2024-10-07 13:36:29.226998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.312 [2024-10-07 13:36:29.227455] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.312 [2024-10-07 13:36:29.227478] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.312 [2024-10-07 13:36:29.236481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.312 [2024-10-07 13:36:29.236513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.312 [2024-10-07 13:36:29.239074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-10-07 13:36:29.239106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.312 [2024-10-07 13:36:29.239124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.312 [2024-10-07 13:36:29.239232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-10-07 13:36:29.239257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.312 [2024-10-07 13:36:29.239272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.312 [2024-10-07 13:36:29.240622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.312 [2024-10-07 13:36:29.240675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.312 [2024-10-07 13:36:29.241253] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.312 [2024-10-07 13:36:29.241277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.312 [2024-10-07 13:36:29.241290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.312 [2024-10-07 13:36:29.241307] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.312 [2024-10-07 13:36:29.241320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.312 [2024-10-07 13:36:29.241333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.312 [2024-10-07 13:36:29.241421] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.312 [2024-10-07 13:36:29.241442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.312 [2024-10-07 13:36:29.246594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.312 [2024-10-07 13:36:29.246639] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.312 [2024-10-07 13:36:29.246803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-10-07 13:36:29.246837] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.312 [2024-10-07 13:36:29.246855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.312 [2024-10-07 13:36:29.246977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-10-07 13:36:29.247004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.312 [2024-10-07 13:36:29.247020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.312 [2024-10-07 13:36:29.247039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.312 [2024-10-07 13:36:29.247065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.312 [2024-10-07 13:36:29.247084] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.312 [2024-10-07 13:36:29.247097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.312 [2024-10-07 13:36:29.247110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.312 [2024-10-07 13:36:29.249119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.312 [2024-10-07 13:36:29.249146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.312 [2024-10-07 13:36:29.249161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.312 [2024-10-07 13:36:29.249174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.312 [2024-10-07 13:36:29.249509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.312 [2024-10-07 13:36:29.256719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.312 [2024-10-07 13:36:29.256870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-10-07 13:36:29.256900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.312 [2024-10-07 13:36:29.256917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.312 [2024-10-07 13:36:29.256944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.312 [2024-10-07 13:36:29.256976] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.312 [2024-10-07 13:36:29.257197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.312 [2024-10-07 13:36:29.257225] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.312 [2024-10-07 13:36:29.257241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.312 [2024-10-07 13:36:29.257256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.312 [2024-10-07 13:36:29.257269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.312 [2024-10-07 13:36:29.257281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.312 [2024-10-07 13:36:29.257306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.257327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.257350] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.257370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.257384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.313 [2024-10-07 13:36:29.257407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.313 [2024-10-07 13:36:29.269915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.269967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.270320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.270351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.313 [2024-10-07 13:36:29.270369] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.270505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.270532] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.313 [2024-10-07 13:36:29.270548] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.270599] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.270624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.270646] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.270661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.270685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.313 [2024-10-07 13:36:29.270703] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.270718] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.270731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.313 [2024-10-07 13:36:29.270755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.270772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.280641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.280682] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.280976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.281007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.313 [2024-10-07 13:36:29.281024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.281110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.281138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.313 [2024-10-07 13:36:29.281155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.281263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.313 [2024-10-07 13:36:29.281299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.283650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.283685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.283701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.313 [2024-10-07 13:36:29.283719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.283734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.283747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.313 [2024-10-07 13:36:29.284754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.284779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.313 [2024-10-07 13:36:29.291040] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.291086] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.291255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.291284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.313 [2024-10-07 13:36:29.291301] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.291391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.291418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.313 [2024-10-07 13:36:29.291434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.291732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.291759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.291781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.291796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.291809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.313 [2024-10-07 13:36:29.291825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.291839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.291850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.313 [2024-10-07 13:36:29.291874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.291890] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.301169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.301219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.301458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.301488] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.313 [2024-10-07 13:36:29.301511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.301839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.301868] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.313 [2024-10-07 13:36:29.301885] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.301904] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.313 [2024-10-07 13:36:29.302110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.302137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.302152] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.302165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.313 [2024-10-07 13:36:29.302369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.302394] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.302408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.302422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.313 [2024-10-07 13:36:29.302471] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.313 [2024-10-07 13:36:29.314964] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.315011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.315579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.315610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.313 [2024-10-07 13:36:29.315627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.315751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.315777] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.313 [2024-10-07 13:36:29.315793] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.316199] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.316229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.316458] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.316482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.316496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.313 [2024-10-07 13:36:29.316513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.316528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.316541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.313 [2024-10-07 13:36:29.316613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.316649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.325312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.325360] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.327241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.327275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.313 [2024-10-07 13:36:29.327292] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.327379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.327405] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.313 [2024-10-07 13:36:29.327421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.329677] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.313 [2024-10-07 13:36:29.329710] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.330420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.330445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.330459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.313 [2024-10-07 13:36:29.330477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.330493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.330505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.313 [2024-10-07 13:36:29.330862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.330888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.313 [2024-10-07 13:36:29.335840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.335871] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.336014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.336043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.313 [2024-10-07 13:36:29.336060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.336136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.336162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.313 [2024-10-07 13:36:29.336178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.336712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.336741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.336880] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.336904] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.336919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.313 [2024-10-07 13:36:29.336936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.336951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.336963] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.313 [2024-10-07 13:36:29.337004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.337024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.346025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.346058] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.346194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.346224] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.313 [2024-10-07 13:36:29.346241] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.346374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.346402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.313 [2024-10-07 13:36:29.346418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.346655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.346694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.346762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.346784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.346798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.313 [2024-10-07 13:36:29.346815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.346829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.346842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.313 [2024-10-07 13:36:29.347025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.347064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.313 [2024-10-07 13:36:29.358278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.358326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.358543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.358574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.313 [2024-10-07 13:36:29.358591] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.358702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.358730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.313 [2024-10-07 13:36:29.358747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.358772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.358794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.313 [2024-10-07 13:36:29.358814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.358830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.358843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.313 [2024-10-07 13:36:29.358860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.313 [2024-10-07 13:36:29.358874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.313 [2024-10-07 13:36:29.358887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.313 [2024-10-07 13:36:29.358911] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.358928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.313 [2024-10-07 13:36:29.374738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.374772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.313 [2024-10-07 13:36:29.375186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.313 [2024-10-07 13:36:29.375218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.313 [2024-10-07 13:36:29.375235] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.313 [2024-10-07 13:36:29.375311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.375337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.314 [2024-10-07 13:36:29.375354] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.375594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.375624] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.376220] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.376244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.376257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.314 [2024-10-07 13:36:29.376273] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.376287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.376299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.314 [2024-10-07 13:36:29.376552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.314 [2024-10-07 13:36:29.376583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.314 [2024-10-07 13:36:29.386700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.386734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.386959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.386989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.314 [2024-10-07 13:36:29.387006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.387114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.387140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.314 [2024-10-07 13:36:29.387156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.389322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.389354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.390296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.390320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.390334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.314 [2024-10-07 13:36:29.390352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.390366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.390379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.314 [2024-10-07 13:36:29.390590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.314 [2024-10-07 13:36:29.390613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.314 [2024-10-07 13:36:29.403270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.403302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.403929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.403961] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.314 [2024-10-07 13:36:29.403978] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.404089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.404114] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.314 [2024-10-07 13:36:29.404130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.404487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.404516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.404761] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.404792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.404808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.314 [2024-10-07 13:36:29.404826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.404840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.404853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.314 [2024-10-07 13:36:29.404905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.314 [2024-10-07 13:36:29.404925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.314 [2024-10-07 13:36:29.418274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.418306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.418703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.418735] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.314 [2024-10-07 13:36:29.418752] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.418840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.418865] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.314 [2024-10-07 13:36:29.418881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.419086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.419116] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.419316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.419340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.419355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.314 [2024-10-07 13:36:29.419373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.419388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.419401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.314 [2024-10-07 13:36:29.419465] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.314 [2024-10-07 13:36:29.419500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.314 [2024-10-07 13:36:29.432587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.432620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.433291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.433322] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.314 [2024-10-07 13:36:29.433339] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.433456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.433496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.314 [2024-10-07 13:36:29.433514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.433746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.433776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.433977] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.434001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.434016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.314 [2024-10-07 13:36:29.434033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.434048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.434061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.314 [2024-10-07 13:36:29.434292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.314 [2024-10-07 13:36:29.434318] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.314 [2024-10-07 13:36:29.442878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.445566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.445682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.445711] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.314 [2024-10-07 13:36:29.445728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.446390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.446420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.314 [2024-10-07 13:36:29.446437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.446457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.448104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.448133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.448148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.448161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.314 [2024-10-07 13:36:29.448888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.314 [2024-10-07 13:36:29.448914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.448928] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.448941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.314 [2024-10-07 13:36:29.449207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.314 [2024-10-07 13:36:29.452966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.453139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.453168] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.314 [2024-10-07 13:36:29.453185] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.453210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.453234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.453250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.453263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.314 [2024-10-07 13:36:29.453288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.314 [2024-10-07 13:36:29.455842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.455971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.456001] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.314 [2024-10-07 13:36:29.456024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.456474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.456505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.456535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.456548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.314 [2024-10-07 13:36:29.456572] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.314 [2024-10-07 13:36:29.463306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.463454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.463483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.314 [2024-10-07 13:36:29.463501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.463526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.463551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.463566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.463580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.314 [2024-10-07 13:36:29.463605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.314 [2024-10-07 13:36:29.467471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.467632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.467675] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.314 [2024-10-07 13:36:29.467699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.467725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.467750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.467765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.467778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.314 [2024-10-07 13:36:29.467803] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.314 [2024-10-07 13:36:29.479025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.479074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.479234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.479263] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.314 [2024-10-07 13:36:29.479280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.479394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.479420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.314 [2024-10-07 13:36:29.479436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.479455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.479481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.479500] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.479513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.479526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.314 [2024-10-07 13:36:29.479551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.314 [2024-10-07 13:36:29.479568] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.479581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.479594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.314 [2024-10-07 13:36:29.479616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.314 [2024-10-07 13:36:29.494725] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.494758] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.314 [2024-10-07 13:36:29.494921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.494951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.314 [2024-10-07 13:36:29.494969] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.495081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.314 [2024-10-07 13:36:29.495108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.314 [2024-10-07 13:36:29.495133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.314 [2024-10-07 13:36:29.495160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.495181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.314 [2024-10-07 13:36:29.495204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.314 [2024-10-07 13:36:29.495219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.314 [2024-10-07 13:36:29.495233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.314 [2024-10-07 13:36:29.495250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.315 [2024-10-07 13:36:29.495264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.315 [2024-10-07 13:36:29.495277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.315 [2024-10-07 13:36:29.495301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.315 [2024-10-07 13:36:29.495317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.315 [2024-10-07 13:36:29.507013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.507048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.509588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.509621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.315 [2024-10-07 13:36:29.509639] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.509755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.509782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.315 [2024-10-07 13:36:29.509799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.510834] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.510865] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.511376] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.511400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.511421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.511437] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.511451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.511463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.511981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.512005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.517131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.517513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.517690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.517719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.315 [2024-10-07 13:36:29.517736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.517887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.517916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.315 [2024-10-07 13:36:29.517933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.517952] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.517978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.517996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.518010] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.518023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.518048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.518066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.518079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.518093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.518115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.527306] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.527448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.527479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.315 [2024-10-07 13:36:29.527496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.527521] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.527546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.527561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.527575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.527894] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.527992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.528145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.528173] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.315 [2024-10-07 13:36:29.528190] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.528394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.528465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.528485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.528514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.528539] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.540190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.540222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.540362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.540393] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.315 [2024-10-07 13:36:29.540410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.540494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.540522] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.315 [2024-10-07 13:36:29.540538] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.540563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.540584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.540605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.540620] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.540633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.540650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.540664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.540689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.540714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.540731] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.553593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.553628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.556172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.556205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.315 [2024-10-07 13:36:29.556222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.556329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.556354] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.315 [2024-10-07 13:36:29.556375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.557436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.557466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.558087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.558111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.558132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.558148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.558162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.558174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.558453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.558479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.563719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.563749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.563909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.563938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.315 [2024-10-07 13:36:29.563955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.564033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.564059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.315 [2024-10-07 13:36:29.564076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.566379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.566411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.566872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.566897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.566917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.566935] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.566966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.566978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.567125] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.567148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.573829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.573880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.574015] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.574045] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.315 [2024-10-07 13:36:29.574063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.574374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.574404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.315 [2024-10-07 13:36:29.574420] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.574439] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.574491] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.574513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.574526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.574540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.574565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.574582] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.574595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.574608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.574632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.586916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.586949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.587313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.587345] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.315 [2024-10-07 13:36:29.587364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.587469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.587497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.315 [2024-10-07 13:36:29.587514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.587768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.587798] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.587850] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.587872] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.587886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.587904] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.587924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.587938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.588191] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.588216] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.601006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.601041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.601376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.601408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.315 [2024-10-07 13:36:29.601427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.601534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.601561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.315 [2024-10-07 13:36:29.601578] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.602099] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.602129] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.602369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.602394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.602409] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.602426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.602442] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.602457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.315 [2024-10-07 13:36:29.602707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.602732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.315 [2024-10-07 13:36:29.612461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.612494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.315 [2024-10-07 13:36:29.612798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.612829] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.315 [2024-10-07 13:36:29.612847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.612933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.315 [2024-10-07 13:36:29.612971] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.315 [2024-10-07 13:36:29.612987] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.315 [2024-10-07 13:36:29.613115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.613143] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.315 [2024-10-07 13:36:29.617368] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.315 [2024-10-07 13:36:29.617397] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.315 [2024-10-07 13:36:29.617415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.316 [2024-10-07 13:36:29.617433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.316 [2024-10-07 13:36:29.617448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.316 [2024-10-07 13:36:29.617460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.316 [2024-10-07 13:36:29.617960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.316 [2024-10-07 13:36:29.618000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.316 8442.50 IOPS, 32.98 MiB/s [2024-10-07T11:36:38.028Z] [2024-10-07 13:36:29.622574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.316 [2024-10-07 13:36:29.622619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.316 [2024-10-07 13:36:29.622756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.316 [2024-10-07 13:36:29.622786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.316 [2024-10-07 13:36:29.622804] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.316 [2024-10-07 13:36:29.623045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.316 [2024-10-07 13:36:29.623075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.316 [2024-10-07 13:36:29.623092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.316 [2024-10-07 13:36:29.623111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.316 [2024-10-07 13:36:29.623260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.316 [2024-10-07 13:36:29.623287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.316 [2024-10-07 13:36:29.623302] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.316 [2024-10-07 13:36:29.623316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.316 [2024-10-07 13:36:29.623442] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.316 [2024-10-07 13:36:29.623466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.316 [2024-10-07 13:36:29.623481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.316 [2024-10-07 13:36:29.623495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.316 [2024-10-07 13:36:29.623605] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.316 [2024-10-07 13:36:29.632703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.316 [2024-10-07 13:36:29.632752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.316 [2024-10-07 13:36:29.632858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.316 [2024-10-07 13:36:29.632888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.316 [2024-10-07 13:36:29.632905] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.316 [2024-10-07 13:36:29.633214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.316 [2024-10-07 13:36:29.633243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.316 [2024-10-07 13:36:29.633261] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.316 [2024-10-07 13:36:29.633280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.316 [2024-10-07 13:36:29.633527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.316 [2024-10-07 13:36:29.633555] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.316 [2024-10-07 13:36:29.633569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.316 [2024-10-07 13:36:29.633584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.316 [2024-10-07 13:36:29.633655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.316 [2024-10-07 13:36:29.633686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.316 [2024-10-07 13:36:29.633716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.316 [2024-10-07 13:36:29.633730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.316 [2024-10-07 13:36:29.633755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.316 [2024-10-07 13:36:29.643396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.316 [2024-10-07 13:36:29.643431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.316 [2024-10-07 13:36:29.643729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.316 [2024-10-07 13:36:29.643762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420
00:25:56.316 [2024-10-07 13:36:29.643780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set
00:25:56.316 [2024-10-07 13:36:29.643895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.316 [2024-10-07 13:36:29.643922] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.316 [2024-10-07 13:36:29.643939] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.316 [2024-10-07 13:36:29.644048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor
00:25:56.316 [2024-10-07 13:36:29.644076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.316 [2024-10-07 13:36:29.644181] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.316 [2024-10-07 13:36:29.644203] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.316 [2024-10-07 13:36:29.644217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.316 [2024-10-07 13:36:29.644236] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.316 [2024-10-07 13:36:29.644258] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.316 [2024-10-07 13:36:29.644271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.316 [2024-10-07 13:36:29.644380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.316 [2024-10-07 13:36:29.644401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.316 [2024-10-07 13:36:29.653513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.316 [2024-10-07 13:36:29.653562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.316 [2024-10-07 13:36:29.654450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-10-07 13:36:29.654483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.316 [2024-10-07 13:36:29.654511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.316 [2024-10-07 13:36:29.654631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-10-07 13:36:29.654657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.316 [2024-10-07 13:36:29.654686] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.316 [2024-10-07 13:36:29.654706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.316 [2024-10-07 13:36:29.654732] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.316 [2024-10-07 13:36:29.654752] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.316 [2024-10-07 13:36:29.654766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.316 [2024-10-07 13:36:29.654779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.316 [2024-10-07 13:36:29.654804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.316 [2024-10-07 13:36:29.654823] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.316 [2024-10-07 13:36:29.654836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.316 [2024-10-07 13:36:29.654851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.316 [2024-10-07 13:36:29.654874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.316 [2024-10-07 13:36:29.664748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.316 [2024-10-07 13:36:29.664782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.316 [2024-10-07 13:36:29.664947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-10-07 13:36:29.664977] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.316 [2024-10-07 13:36:29.664995] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.316 [2024-10-07 13:36:29.665091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-10-07 13:36:29.665118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.316 [2024-10-07 13:36:29.665134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.316 [2024-10-07 13:36:29.665160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.316 [2024-10-07 13:36:29.665187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.316 [2024-10-07 13:36:29.665210] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.316 [2024-10-07 13:36:29.665225] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.316 [2024-10-07 13:36:29.665240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.316 [2024-10-07 13:36:29.665257] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.316 [2024-10-07 13:36:29.665272] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.316 [2024-10-07 13:36:29.665285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.316 [2024-10-07 13:36:29.665309] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.316 [2024-10-07 13:36:29.665327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.316 [2024-10-07 13:36:29.679677] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.316 [2024-10-07 13:36:29.679711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.316 [2024-10-07 13:36:29.679933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-10-07 13:36:29.679976] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.316 [2024-10-07 13:36:29.679994] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.316 [2024-10-07 13:36:29.680727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-10-07 13:36:29.680759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.316 [2024-10-07 13:36:29.680777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.316 [2024-10-07 13:36:29.680803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.316 [2024-10-07 13:36:29.680825] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.316 [2024-10-07 13:36:29.680846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.316 [2024-10-07 13:36:29.680860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.316 [2024-10-07 13:36:29.680873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.316 [2024-10-07 13:36:29.680890] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.316 [2024-10-07 13:36:29.680905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.316 [2024-10-07 13:36:29.680918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.316 [2024-10-07 13:36:29.680941] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.316 [2024-10-07 13:36:29.680958] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.316 [2024-10-07 13:36:29.690124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.316 [2024-10-07 13:36:29.690158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.316 [2024-10-07 13:36:29.690443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-10-07 13:36:29.690479] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.316 [2024-10-07 13:36:29.690498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.316 [2024-10-07 13:36:29.690636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-10-07 13:36:29.690663] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce5910 with addr=10.0.0.2, port=4420 00:25:56.316 [2024-10-07 13:36:29.690695] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce5910 is same with the state(6) to be set 00:25:56.316 [2024-10-07 13:36:29.690803] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.316 [2024-10-07 13:36:29.690830] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce5910 (9): Bad file descriptor 00:25:56.316 [2024-10-07 13:36:29.690981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.316 [2024-10-07 13:36:29.691005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.316 [2024-10-07 13:36:29.691019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.316 [2024-10-07 13:36:29.691036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.316 [2024-10-07 13:36:29.691056] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.316 [2024-10-07 13:36:29.691069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.316 [2024-10-07 13:36:29.691205] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.316 [2024-10-07 13:36:29.691228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.316 [2024-10-07 13:36:29.700242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.316 [2024-10-07 13:36:29.700289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.316 [2024-10-07 13:36:29.700533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-10-07 13:36:29.700562] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.316 [2024-10-07 13:36:29.700579] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.316 [2024-10-07 13:36:29.700628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.316 [2024-10-07 13:36:29.700682] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.316 [2024-10-07 13:36:29.700714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.316 [2024-10-07 13:36:29.700727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.316 [2024-10-07 13:36:29.700762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.316 [2024-10-07 13:36:29.712535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.316 [2024-10-07 13:36:29.712892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-10-07 13:36:29.712925] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.316 [2024-10-07 13:36:29.712944] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.316 [2024-10-07 13:36:29.713037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.316 [2024-10-07 13:36:29.713085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.316 [2024-10-07 13:36:29.713104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.316 [2024-10-07 13:36:29.713117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.316 [2024-10-07 13:36:29.713387] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.316 [2024-10-07 13:36:29.726899] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.316 [2024-10-07 13:36:29.727095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-10-07 13:36:29.727126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.316 [2024-10-07 13:36:29.727145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.316 [2024-10-07 13:36:29.727192] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.316 [2024-10-07 13:36:29.727230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.316 [2024-10-07 13:36:29.727249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.316 [2024-10-07 13:36:29.727263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.316 [2024-10-07 13:36:29.727287] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:56.316 [2024-10-07 13:36:29.727304] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.316 [2024-10-07 13:36:29.737212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.316 [2024-10-07 13:36:29.737390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.316 [2024-10-07 13:36:29.737419] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.316 [2024-10-07 13:36:29.737436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.737883] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.737916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.737932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.737946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.737997] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.747313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.747513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.747542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.747559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.747584] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.747609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.747624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.747637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.747680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.760661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.761033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.761065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.761083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.761294] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.761369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.761391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.761405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.761430] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.775457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.776503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.776536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.776554] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.776954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.777207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.777233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.777249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.777301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.785547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.785703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.785732] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.785749] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.785774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.785798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.785813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.785827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.785852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.795630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.795818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.795852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.795870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.797189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.798193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.798218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.798231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.799018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.809450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.809757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.809791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.809810] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.809862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.809891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.809907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.809921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.809945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.820333] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.820534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.820565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.820582] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.820703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.820815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.820836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.820851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.823742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.830533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.830749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.830778] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.830796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.830821] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.830853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.830869] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.830882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.830908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.840773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.840915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.840945] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.840963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.841147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.841232] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.841254] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.841269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.841294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.853085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.854901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.854934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.854951] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.855413] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.855825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.855851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.855866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.856383] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.864580] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.864848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.864880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.864898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.865008] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.865146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.865183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.865197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.865314] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.875377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.875610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.875642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.875660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.875779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.875905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.875926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.875939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.876055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.885852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.885979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.886009] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.886026] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.886211] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.886282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.886317] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.886332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.886358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.899877] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.900028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.900057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.900075] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.900101] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.900125] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.900141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.900154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.900592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.915179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.915755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.915788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.915811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.916030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.916087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.916108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.916122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.916147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.930103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.930332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.930362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.930379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.930405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.930470] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.930492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.930506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.930531] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.943083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.943494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.943527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.943545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.943763] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.943821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.943842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.943856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.943881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.958563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.958970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.959003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.959021] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.959539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.959802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.959834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.959850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.960064] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.970205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.970434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.970463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.970482] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.974266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.974896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.974922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.974936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.975228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.980294] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.980470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.317 [2024-10-07 13:36:29.980499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.317 [2024-10-07 13:36:29.980516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.317 [2024-10-07 13:36:29.980542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.317 [2024-10-07 13:36:29.980566] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.317 [2024-10-07 13:36:29.980581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.317 [2024-10-07 13:36:29.980594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.317 [2024-10-07 13:36:29.980619] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.317 [2024-10-07 13:36:29.990574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.317 [2024-10-07 13:36:29.990714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-10-07 13:36:29.990744] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.318 [2024-10-07 13:36:29.990762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.318 [2024-10-07 13:36:29.990947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.318 [2024-10-07 13:36:29.991018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.318 [2024-10-07 13:36:29.991038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.318 [2024-10-07 13:36:29.991051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.318 [2024-10-07 13:36:29.991093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.318 [2024-10-07 13:36:30.004212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.318 [2024-10-07 13:36:30.004378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-10-07 13:36:30.004409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.318 [2024-10-07 13:36:30.004427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.318 [2024-10-07 13:36:30.004454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.318 [2024-10-07 13:36:30.004505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.318 [2024-10-07 13:36:30.004525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.318 [2024-10-07 13:36:30.004539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.318 [2024-10-07 13:36:30.004563] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.318 [2024-10-07 13:36:30.014311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.318 [2024-10-07 13:36:30.014500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-10-07 13:36:30.014530] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.318 [2024-10-07 13:36:30.014547] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.318 [2024-10-07 13:36:30.014573] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.318 [2024-10-07 13:36:30.014624] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.318 [2024-10-07 13:36:30.014644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.318 [2024-10-07 13:36:30.014658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.318 [2024-10-07 13:36:30.014694] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.318 [2024-10-07 13:36:30.024537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.318 [2024-10-07 13:36:30.024698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-10-07 13:36:30.024729] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.318 [2024-10-07 13:36:30.024746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.318 [2024-10-07 13:36:30.024772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.318 [2024-10-07 13:36:30.024822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.318 [2024-10-07 13:36:30.024842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.318 [2024-10-07 13:36:30.024857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.318 [2024-10-07 13:36:30.024882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.318 [2024-10-07 13:36:30.037624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.318 [2024-10-07 13:36:30.037812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-10-07 13:36:30.037843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.318 [2024-10-07 13:36:30.037860] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.318 [2024-10-07 13:36:30.037894] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.318 [2024-10-07 13:36:30.037919] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.318 [2024-10-07 13:36:30.037934] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.318 [2024-10-07 13:36:30.037948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.318 [2024-10-07 13:36:30.037973] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.318 [2024-10-07 13:36:30.052381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.318 [2024-10-07 13:36:30.052512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-10-07 13:36:30.052542] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.318 [2024-10-07 13:36:30.052559] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.318 [2024-10-07 13:36:30.052585] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.318 [2024-10-07 13:36:30.052609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.318 [2024-10-07 13:36:30.052624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.318 [2024-10-07 13:36:30.052639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.318 [2024-10-07 13:36:30.052663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.318 [2024-10-07 13:36:30.066494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.318 [2024-10-07 13:36:30.068920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-10-07 13:36:30.068955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.318 [2024-10-07 13:36:30.068973] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.318 [2024-10-07 13:36:30.069772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.318 [2024-10-07 13:36:30.070159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.318 [2024-10-07 13:36:30.070183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.318 [2024-10-07 13:36:30.070197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.318 [2024-10-07 13:36:30.070276] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.318 [2024-10-07 13:36:30.076583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.318 [2024-10-07 13:36:30.076826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-10-07 13:36:30.076857] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.318 [2024-10-07 13:36:30.076875] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.318 [2024-10-07 13:36:30.076901] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.318 [2024-10-07 13:36:30.076924] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.318 [2024-10-07 13:36:30.076939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.318 [2024-10-07 13:36:30.076959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.318 [2024-10-07 13:36:30.076985] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.318 [2024-10-07 13:36:30.086734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.318 [2024-10-07 13:36:30.086889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-10-07 13:36:30.086918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.318 [2024-10-07 13:36:30.086937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.318 [2024-10-07 13:36:30.086961] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.318 [2024-10-07 13:36:30.086986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.318 [2024-10-07 13:36:30.087001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.318 [2024-10-07 13:36:30.087014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.318 [2024-10-07 13:36:30.087211] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.318 [2024-10-07 13:36:30.100151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.318 [2024-10-07 13:36:30.100551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-10-07 13:36:30.100585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.318 [2024-10-07 13:36:30.100603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.318 [2024-10-07 13:36:30.100819] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.318 [2024-10-07 13:36:30.101037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.318 [2024-10-07 13:36:30.101063] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.318 [2024-10-07 13:36:30.101078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.318 [2024-10-07 13:36:30.101132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.318 [2024-10-07 13:36:30.112814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.318 [2024-10-07 13:36:30.116504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-10-07 13:36:30.116538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.318 [2024-10-07 13:36:30.116556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.318 [2024-10-07 13:36:30.117091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.318 [2024-10-07 13:36:30.117367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.318 [2024-10-07 13:36:30.117393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.318 [2024-10-07 13:36:30.117407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.318 [2024-10-07 13:36:30.117611] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.318 [2024-10-07 13:36:30.122901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.318 [2024-10-07 13:36:30.123083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-10-07 13:36:30.123111] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.318 [2024-10-07 13:36:30.123128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.318 [2024-10-07 13:36:30.123153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.318 [2024-10-07 13:36:30.123177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.318 [2024-10-07 13:36:30.123192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.318 [2024-10-07 13:36:30.123205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.318 [2024-10-07 13:36:30.123229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.318 [2024-10-07 13:36:30.133021] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.318 [2024-10-07 13:36:30.133425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-10-07 13:36:30.133456] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.318 [2024-10-07 13:36:30.133474] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.318 [2024-10-07 13:36:30.133526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.318 [2024-10-07 13:36:30.133720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.318 [2024-10-07 13:36:30.133744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.318 [2024-10-07 13:36:30.133759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.318 [2024-10-07 13:36:30.133810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.318 [2024-10-07 13:36:30.148503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.318 [2024-10-07 13:36:30.148647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.318 [2024-10-07 13:36:30.148685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.318 [2024-10-07 13:36:30.148704] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.318 [2024-10-07 13:36:30.148730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.318 [2024-10-07 13:36:30.148754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.318 [2024-10-07 13:36:30.148769] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.318 [2024-10-07 13:36:30.148783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.318 [2024-10-07 13:36:30.148808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.318-00:25:56.320 [... the same reconnect cycle (nvme_ctrlr_disconnect, connect() failed with errno = 111 on addr=10.0.0.2, port=4421, qpair flush failure, then controller reinitialization and reset failure for nqn.2016-06.io.spdk:cnode1) repeats 27 more times between 13:36:30.164 and 13:36:30.502; only the record timestamps differ ...]
00:25:56.320 [2024-10-07 13:36:30.516453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.516825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.516859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.516877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.517368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.517608] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.517634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.517650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.517872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.527757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.527985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.528017] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.528035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.530267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.530698] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.530724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.530740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.531736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.537845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.537980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.538023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.538040] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.538064] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.538088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.538103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.538131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.538157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.548178] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.548350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.548379] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.548396] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.548581] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.548653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.548698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.548720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.548746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.560468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.560825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.560859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.560877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.561083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.561291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.561316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.561331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.561382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.574555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.575273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.575305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.575323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.575729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.575956] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.575982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.575997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.576049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.585179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.585379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.585409] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.585427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.585551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.585704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.585727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.585741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.585848] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.595297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.595503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.595537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.595556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.595582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.595606] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.595621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.595635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.595660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.606060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.606234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.606264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.606281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.606306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.606331] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.606347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.606360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.606385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.620722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.621083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.621116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.621134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 8436.33 IOPS, 32.95 MiB/s [2024-10-07T11:36:38.032Z] [2024-10-07 13:36:30.623610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.623767] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.623790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.623804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.623829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.632069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.632299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.632330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.632348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.635974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.636609] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.636636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.636675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.636937] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.642155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.642311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.642340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.642357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.642382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.642407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.642422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.642436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.642928] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.652529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.652851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.652886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.652904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.652957] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.652986] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.653002] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.653015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.653462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.667304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.667695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.667730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.667748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.667954] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.668012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.668035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.668050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.668081] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.681956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.682139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.682170] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.682189] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.682434] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.682497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.682519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.682533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.682728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.697279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.697527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.697557] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.697575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.697601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.697643] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.697663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.697688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.697714] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.707870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.708161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.708192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.708210] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.708319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.708443] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.708465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.708479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.711583] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.717975] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.718126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.718155] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.718177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.718204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.718228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.320 [2024-10-07 13:36:30.718244] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.320 [2024-10-07 13:36:30.718258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.320 [2024-10-07 13:36:30.718282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.320 [2024-10-07 13:36:30.728060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.320 [2024-10-07 13:36:30.728275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.320 [2024-10-07 13:36:30.728305] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.320 [2024-10-07 13:36:30.728322] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.320 [2024-10-07 13:36:30.728348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.320 [2024-10-07 13:36:30.728372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.728387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.728401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.728426] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.740938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.741222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.741254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.741272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.743066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.743800] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.743825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.743840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.744130] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.751193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.751391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.751421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.751440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.751852] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.751906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.751926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.751940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.751965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.762778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.762901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.762931] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.762948] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.762988] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.763020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.763036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.763050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.763075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.774546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.774871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.774903] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.774921] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.775127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.775193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.775214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.775229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.775254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.784821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.785046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.785077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.785095] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.785202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.785325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.785347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.785360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.789450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.794909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.795036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.795066] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.795083] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.795109] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.795133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.795149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.795163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.795187] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.804995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.806077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.806110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.806139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.806164] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.806187] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.806201] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.806214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.806237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.820433] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.820626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.820658] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.820711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.820741] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.820765] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.820781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.820793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.820819] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.832312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.832567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.832598] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.832622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.832759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.832873] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.832895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.832908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.836039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.842401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.842608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.842637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.842662] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.842698] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.842734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.842748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.842762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.842786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.852956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.853107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.853138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.853155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.853340] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.853414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.853451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.853466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.853491] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.868442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.868606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.868636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.868654] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.868689] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.868740] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.868765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.868779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.868805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.880516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.880775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.880806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.880825] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.880933] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.883078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.883106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.883126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.883738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.891825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.892009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.892039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.892057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.892083] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.892108] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.892124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.892137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.892161] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.902681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.902903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.902934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.902952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.903060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.905015] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.905042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.905062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.905160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.912773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.912931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.912960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.912977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.913176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.321 [2024-10-07 13:36:30.913247] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.321 [2024-10-07 13:36:30.913268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.321 [2024-10-07 13:36:30.913297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.321 [2024-10-07 13:36:30.913323] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.321 [2024-10-07 13:36:30.926930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.321 [2024-10-07 13:36:30.927174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.321 [2024-10-07 13:36:30.927206] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.321 [2024-10-07 13:36:30.927223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.321 [2024-10-07 13:36:30.927249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:30.927274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:30.927289] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:30.927302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:30.927328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.322 [2024-10-07 13:36:30.941490] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:30.941645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:30.941686] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.322 [2024-10-07 13:36:30.941707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:30.941733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:30.941782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:30.941802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:30.941815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:30.941840] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.322 [2024-10-07 13:36:30.954587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:30.954709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:30.954740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.322 [2024-10-07 13:36:30.954758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:30.954790] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:30.954815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:30.954831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:30.954844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:30.954868] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.322 [2024-10-07 13:36:30.969257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:30.969400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:30.969430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.322 [2024-10-07 13:36:30.969448] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:30.969474] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:30.969499] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:30.969514] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:30.969528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:30.969552] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.322 [2024-10-07 13:36:30.984202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:30.984573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:30.984606] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.322 [2024-10-07 13:36:30.984625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:30.984684] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:30.984713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:30.984728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:30.984741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:30.984925] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.322 [2024-10-07 13:36:30.998932] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d13cb0 was disconnected and freed. reset controller. 
00:25:56.322 [2024-10-07 13:36:30.998968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:30.999379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:30.999481] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:30.999584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:30.999614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.322 [2024-10-07 13:36:30.999631] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.000309] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.000656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.000694] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.322 [2024-10-07 13:36:31.000712] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.000727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.000740] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.000753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.322 [2024-10-07 13:36:31.000960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.322 [2024-10-07 13:36:31.000990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.001039] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.001060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.001074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:31.001099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.322 [2024-10-07 13:36:31.013581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:31.014362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:31.014519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.014550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.322 [2024-10-07 13:36:31.014568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.015031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.015061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.322 [2024-10-07 13:36:31.015078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.015097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.015316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.015344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.015358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.015371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:31.015592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.322 [2024-10-07 13:36:31.015618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.015633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.015647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:31.015713] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.322 [2024-10-07 13:36:31.029864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:31.029898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:31.030024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.030054] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.322 [2024-10-07 13:36:31.030071] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.030181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.030207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.322 [2024-10-07 13:36:31.030224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.030249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.030271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.030292] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.030307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.030321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.322 [2024-10-07 13:36:31.030338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.030352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.030365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:31.030390] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.322 [2024-10-07 13:36:31.030407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.322 [2024-10-07 13:36:31.043129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:31.043182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:31.043778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.043809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.322 [2024-10-07 13:36:31.043833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.043946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.043972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.322 [2024-10-07 13:36:31.043988] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.044205] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.044234] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.044433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.044463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.044478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:31.044496] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.044510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.044523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:31.044588] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.322 [2024-10-07 13:36:31.044622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.322 [2024-10-07 13:36:31.053277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:31.055384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:31.055498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.055528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.322 [2024-10-07 13:36:31.055545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.060044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.060077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.322 [2024-10-07 13:36:31.060096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.060115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.060204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.060229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.060243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.060256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.322 [2024-10-07 13:36:31.060282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.322 [2024-10-07 13:36:31.060300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.060313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.060326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:31.060349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.322 [2024-10-07 13:36:31.063361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:31.063567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.063595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.322 [2024-10-07 13:36:31.063612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.064965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.065364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.065393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.065407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.322 [2024-10-07 13:36:31.065557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.322 [2024-10-07 13:36:31.065820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:31.066017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.066046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.322 [2024-10-07 13:36:31.066063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.067121] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.067269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.067293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.067308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:31.067334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.322 [2024-10-07 13:36:31.075999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:31.076371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.076403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.322 [2024-10-07 13:36:31.076421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.076628] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:31.076676] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.076979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.077019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.322 [2024-10-07 13:36:31.077036] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.077050] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.077073] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.077085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:31.077137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.322 [2024-10-07 13:36:31.077162] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.077185] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.077200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.077213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:31.077237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.322 [2024-10-07 13:36:31.088625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:31.088659] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.322 [2024-10-07 13:36:31.088962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.089008] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.322 [2024-10-07 13:36:31.089025] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.089106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.322 [2024-10-07 13:36:31.089132] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.322 [2024-10-07 13:36:31.089148] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.322 [2024-10-07 13:36:31.089211] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.089238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.322 [2024-10-07 13:36:31.089261] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.089276] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.089289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:31.089306] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.322 [2024-10-07 13:36:31.089320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.322 [2024-10-07 13:36:31.089333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.322 [2024-10-07 13:36:31.089359] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.322 [2024-10-07 13:36:31.089375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.323 [2024-10-07 13:36:31.101767] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.101801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.102565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.102612] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.323 [2024-10-07 13:36:31.102630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.102752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.102780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.323 [2024-10-07 13:36:31.102796] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.102870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.102898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.102920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.102935] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.102955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.323 [2024-10-07 13:36:31.102973] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.102987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.103000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.323 [2024-10-07 13:36:31.103024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.323 [2024-10-07 13:36:31.103041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.323 [2024-10-07 13:36:31.114467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.114500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.114718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.114749] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.323 [2024-10-07 13:36:31.114767] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.114869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.114896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.323 [2024-10-07 13:36:31.114912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.115561] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.115590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.116658] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.116708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.116730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.323 [2024-10-07 13:36:31.116747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.116762] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.116774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.323 [2024-10-07 13:36:31.117258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.323 [2024-10-07 13:36:31.117282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.323 [2024-10-07 13:36:31.124759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.124791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.125123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.125154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.323 [2024-10-07 13:36:31.125172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.125276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.125303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.323 [2024-10-07 13:36:31.125325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.125452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.125480] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.125524] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.125544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.125559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.323 [2024-10-07 13:36:31.125576] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.125590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.125603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.323 [2024-10-07 13:36:31.125643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.323 [2024-10-07 13:36:31.125659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.323 [2024-10-07 13:36:31.136534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.136581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.136809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.136840] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.323 [2024-10-07 13:36:31.136858] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.136932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.136959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.323 [2024-10-07 13:36:31.136976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.137001] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.137023] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.137044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.137059] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.137073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.323 [2024-10-07 13:36:31.137090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.137104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.137117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.323 [2024-10-07 13:36:31.137142] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.323 [2024-10-07 13:36:31.137158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.323 [2024-10-07 13:36:31.148418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.148458] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.148576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.148607] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.323 [2024-10-07 13:36:31.148624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.148736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.148763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.323 [2024-10-07 13:36:31.148780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.148914] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.148943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.149058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.149082] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.149097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.323 [2024-10-07 13:36:31.149114] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.149129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.149141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.323 [2024-10-07 13:36:31.149266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.323 [2024-10-07 13:36:31.149288] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.323 [2024-10-07 13:36:31.159518] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.159553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.159726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.159757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.323 [2024-10-07 13:36:31.159775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.159855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.159882] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.323 [2024-10-07 13:36:31.159898] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.160157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.160185] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.160401] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.160425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.160440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.323 [2024-10-07 13:36:31.160463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.160478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.160492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.323 [2024-10-07 13:36:31.160574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.323 [2024-10-07 13:36:31.160612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.323 [2024-10-07 13:36:31.173809] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.173843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.174471] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.174502] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.323 [2024-10-07 13:36:31.174520] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.174608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.174633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.323 [2024-10-07 13:36:31.174648] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.175038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.175088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.175163] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.175182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.175196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.323 [2024-10-07 13:36:31.175214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.175229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.175242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.323 [2024-10-07 13:36:31.175424] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.323 [2024-10-07 13:36:31.175462] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.323 [2024-10-07 13:36:31.185003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.185036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.185332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.185362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.323 [2024-10-07 13:36:31.185380] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.185469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.185497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.323 [2024-10-07 13:36:31.185514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.187756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.187788] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.188193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.188232] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.188245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.323 [2024-10-07 13:36:31.188265] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.188293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.188306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.323 [2024-10-07 13:36:31.189286] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.323 [2024-10-07 13:36:31.189325] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.323 [2024-10-07 13:36:31.195129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.195175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.195357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.195386] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.323 [2024-10-07 13:36:31.195403] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.195519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.195554] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.323 [2024-10-07 13:36:31.195571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.195590] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.195616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.195635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.195648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.195661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.323 [2024-10-07 13:36:31.199647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.323 [2024-10-07 13:36:31.199683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.199700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.199713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.323 [2024-10-07 13:36:31.199910] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.323 [2024-10-07 13:36:31.205216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.205443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.323 [2024-10-07 13:36:31.205483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.323 [2024-10-07 13:36:31.205503] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.323 [2024-10-07 13:36:31.205730] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.323 [2024-10-07 13:36:31.205790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.323 [2024-10-07 13:36:31.205825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.323 [2024-10-07 13:36:31.205842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.323 [2024-10-07 13:36:31.205855] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.205880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.205977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.206005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.324 [2024-10-07 13:36:31.206028] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.206054] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.206078] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.206094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.206107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.206131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.324 [2024-10-07 13:36:31.216987] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.217291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.217451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.217481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.324 [2024-10-07 13:36:31.217499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.217804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.217834] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.324 [2024-10-07 13:36:31.217851] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.217870] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.217922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.217944] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.217957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.217970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.324 [2024-10-07 13:36:31.217995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.218018] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.218031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.218044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.218571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.228053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.228087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.228401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.228431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.324 [2024-10-07 13:36:31.228449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.228560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.228587] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.324 [2024-10-07 13:36:31.228603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.230876] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.230908] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.231704] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.231729] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.231750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.231783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.231797] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.231811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.232458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.232483] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.324 [2024-10-07 13:36:31.238171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.238215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.238414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.238444] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.324 [2024-10-07 13:36:31.238461] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.238545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.238573] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.324 [2024-10-07 13:36:31.238590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.238608] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.238643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.238662] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.238689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.238703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.324 [2024-10-07 13:36:31.238728] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.238746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.238759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.238772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.238810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.248268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.248419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.248450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.324 [2024-10-07 13:36:31.248468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.248620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.248660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.248945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.248975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.324 [2024-10-07 13:36:31.248993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.249008] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.249020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.249034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.249201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.249244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.249304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.249325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.249339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.249379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.324 [2024-10-07 13:36:31.261589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.261621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.261801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.261836] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.324 [2024-10-07 13:36:31.261854] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.261967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.261993] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.324 [2024-10-07 13:36:31.262009] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.262034] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.262056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.262077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.262093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.262107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.324 [2024-10-07 13:36:31.262123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.262137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.262150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.262175] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.262192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.277943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.277976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.278185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.278215] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.324 [2024-10-07 13:36:31.278233] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.278342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.278368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.324 [2024-10-07 13:36:31.278384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.278410] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.278432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.278452] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.278467] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.278481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.278498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.278513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.278531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.278557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.278574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.324 [2024-10-07 13:36:31.292716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.292750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.292858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.292888] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.324 [2024-10-07 13:36:31.292906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.293017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.293044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.324 [2024-10-07 13:36:31.293060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.293086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.293108] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.293129] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.293144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.293157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.324 [2024-10-07 13:36:31.293174] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.293189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.293202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.293227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.293243] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.308363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.308412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.308943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.308975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.324 [2024-10-07 13:36:31.308992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.309164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.309191] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.324 [2024-10-07 13:36:31.309207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.309462] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.309498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.309550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.309572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.309586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.309603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.309618] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.309630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.309655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.309682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.324 [2024-10-07 13:36:31.319992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.320025] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.320251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.320281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.324 [2024-10-07 13:36:31.320299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.320406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.320433] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.324 [2024-10-07 13:36:31.320449] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.322305] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.322338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.323183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.323207] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.323228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.324 [2024-10-07 13:36:31.323245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.323260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.323272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.323549] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.323574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.330104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.330149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.324 [2024-10-07 13:36:31.330354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.330383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.324 [2024-10-07 13:36:31.330406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.330523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.324 [2024-10-07 13:36:31.330551] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.324 [2024-10-07 13:36:31.330567] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.324 [2024-10-07 13:36:31.330587] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.330613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.324 [2024-10-07 13:36:31.330631] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.330644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.324 [2024-10-07 13:36:31.330657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.324 [2024-10-07 13:36:31.330692] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.324 [2024-10-07 13:36:31.330711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.324 [2024-10-07 13:36:31.330724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.330737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.330760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.325 [2024-10-07 13:36:31.340188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.340338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.340369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.325 [2024-10-07 13:36:31.340387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.340600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.340686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.340735] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.340752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.340767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.340791] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.325 [2024-10-07 13:36:31.340905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.340933] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.325 [2024-10-07 13:36:31.340950] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.340975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.340999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.341014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.341033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.341059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.325 [2024-10-07 13:36:31.352799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.352832] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.352974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.353004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.325 [2024-10-07 13:36:31.353022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.353102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.353129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.325 [2024-10-07 13:36:31.353145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.353171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.353193] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.353214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.353229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.353243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.325 [2024-10-07 13:36:31.353260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.353275] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.353287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.353313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.325 [2024-10-07 13:36:31.353330] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.325 [2024-10-07 13:36:31.362928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.362990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.363154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.363183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.325 [2024-10-07 13:36:31.363201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.363341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.363368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.325 [2024-10-07 13:36:31.363384] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.363402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.366062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.366096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.366112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.366125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.366332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.325 [2024-10-07 13:36:31.366373] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.366387] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.366415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.366551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.325 [2024-10-07 13:36:31.373083] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.373131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.373256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.373285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.325 [2024-10-07 13:36:31.373303] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.373644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.373692] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.325 [2024-10-07 13:36:31.373709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.373729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.373781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.373804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.373817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.373830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.325 [2024-10-07 13:36:31.373855] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.325 [2024-10-07 13:36:31.373872] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.373885] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.373898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.373920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.325 [2024-10-07 13:36:31.387200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.387234] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.387552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.387585] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.325 [2024-10-07 13:36:31.387603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.387728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.387755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.325 [2024-10-07 13:36:31.387772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.387977] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.388009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.388057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.388078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.388092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.388109] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.388124] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.388137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.388162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.325 [2024-10-07 13:36:31.388179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.325 [2024-10-07 13:36:31.402214] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.402247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.402609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.402642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.325 [2024-10-07 13:36:31.402660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.402769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.402795] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.325 [2024-10-07 13:36:31.402811] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.403035] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.403067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.403118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.403138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.403152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.325 [2024-10-07 13:36:31.403170] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.403184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.403197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.403379] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.325 [2024-10-07 13:36:31.403423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.325 [2024-10-07 13:36:31.418544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.418578] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.419132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.419166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.325 [2024-10-07 13:36:31.419198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.419345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.419371] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.325 [2024-10-07 13:36:31.419387] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.419607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.419635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.419848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.419875] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.419890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.419908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.419923] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.419936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.420138] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.325 [2024-10-07 13:36:31.420162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.325 [2024-10-07 13:36:31.433329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.433362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.433496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.433525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.325 [2024-10-07 13:36:31.433542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.433624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.433650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.325 [2024-10-07 13:36:31.433674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.433703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.433726] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.433747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.433767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.433786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.325 [2024-10-07 13:36:31.433804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.433819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.433831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.433856] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.325 [2024-10-07 13:36:31.433872] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.325 [2024-10-07 13:36:31.444304] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.444338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.444449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.444477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.325 [2024-10-07 13:36:31.444494] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.444599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.444625] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.325 [2024-10-07 13:36:31.444641] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.447327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.447359] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.325 [2024-10-07 13:36:31.449296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.449323] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.449338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.449355] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.325 [2024-10-07 13:36:31.449369] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.325 [2024-10-07 13:36:31.449383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.325 [2024-10-07 13:36:31.450201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.325 [2024-10-07 13:36:31.450228] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.325 [2024-10-07 13:36:31.454707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.454740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.325 [2024-10-07 13:36:31.455077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.455108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.325 [2024-10-07 13:36:31.455126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.455238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.325 [2024-10-07 13:36:31.455265] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.325 [2024-10-07 13:36:31.455281] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.325 [2024-10-07 13:36:31.455399] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.455426] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.455465] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.455484] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.455498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.326 [2024-10-07 13:36:31.455515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.455531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.455544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.455568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.455585] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.464890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.464922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.465035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.465063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.326 [2024-10-07 13:36:31.465081] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.465157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.465183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.326 [2024-10-07 13:36:31.465199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.465454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.465483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.465629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.465652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.465676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.465697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.465712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.465725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.465751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.465768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.326 [2024-10-07 13:36:31.479167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.479199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.479866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.479898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.326 [2024-10-07 13:36:31.479915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.480030] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.480055] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.326 [2024-10-07 13:36:31.480070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.480457] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.480487] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.480560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.480596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.480611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.326 [2024-10-07 13:36:31.480629] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.480644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.480657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.480851] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.480875] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.493572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.493605] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.493755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.493784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.326 [2024-10-07 13:36:31.493801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.493881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.493908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.326 [2024-10-07 13:36:31.493924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.493950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.493971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.493993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.494008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.494027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.494044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.494058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.494072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.494097] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.494113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.326 [2024-10-07 13:36:31.503701] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.503733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.503861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.503889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.326 [2024-10-07 13:36:31.503906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.504038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.504064] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.326 [2024-10-07 13:36:31.504080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.506966] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.506999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.509920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.509947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.509962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.326 [2024-10-07 13:36:31.509980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.509995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.510007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.510885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.510912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.513814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.513860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.513994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.514022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.326 [2024-10-07 13:36:31.514038] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.514243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.514270] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.326 [2024-10-07 13:36:31.514297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.514318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.514500] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.514525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.514539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.514553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.514602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.514623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.514636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.514650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.514682] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.326 [2024-10-07 13:36:31.524148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.524181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.526403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.526436] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.326 [2024-10-07 13:36:31.526454] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.526595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.526621] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.326 [2024-10-07 13:36:31.526636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.527496] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.527526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.527968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.527995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.528009] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.326 [2024-10-07 13:36:31.528043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.528058] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.528072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.528319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.528343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.534301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.534339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.534535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.534563] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.326 [2024-10-07 13:36:31.534581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.534724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.534751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.326 [2024-10-07 13:36:31.534768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.535124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.535153] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.535177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.535192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.535205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.535222] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.535235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.535250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.535274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.535290] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.326 [2024-10-07 13:36:31.544683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.544716] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.544826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.544854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.326 [2024-10-07 13:36:31.544871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.544946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.544973] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.326 [2024-10-07 13:36:31.544989] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.545175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.545218] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.545279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.545300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.545314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.326 [2024-10-07 13:36:31.545336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.545352] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.545365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.545548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.545571] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.557373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.557406] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.557518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.557547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.326 [2024-10-07 13:36:31.557564] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.557674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.557701] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.326 [2024-10-07 13:36:31.557717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.557742] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.557765] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.557786] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.557801] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.557814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.557831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.557845] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.557858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.326 [2024-10-07 13:36:31.557883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.326 [2024-10-07 13:36:31.557899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.326 [2024-10-07 13:36:31.571090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.571125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.326 [2024-10-07 13:36:31.572594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.572626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.326 [2024-10-07 13:36:31.572644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.572790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.326 [2024-10-07 13:36:31.572817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.326 [2024-10-07 13:36:31.572838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.326 [2024-10-07 13:36:31.573286] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.573317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.326 [2024-10-07 13:36:31.573396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.573432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.573446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.326 [2024-10-07 13:36:31.573464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.326 [2024-10-07 13:36:31.573478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.326 [2024-10-07 13:36:31.573492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.573517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 [2024-10-07 13:36:31.573534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 [2024-10-07 13:36:31.581222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.581285] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.581446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.581474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.327 [2024-10-07 13:36:31.581491] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.581582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.581608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.327 [2024-10-07 13:36:31.581624] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.581643] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.581678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.581699] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.581712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.581725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.581751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 [2024-10-07 13:36:31.581769] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.581781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.581794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.581816] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.327 [2024-10-07 13:36:31.592071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.592104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.592227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.592257] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.327 [2024-10-07 13:36:31.592273] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.592371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.592411] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.327 [2024-10-07 13:36:31.592428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.592454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.592475] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.592497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.592512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.592527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.327 [2024-10-07 13:36:31.592543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.592558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.592572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.592612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 [2024-10-07 13:36:31.592629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 [2024-10-07 13:36:31.603841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.603875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.604017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.604046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.327 [2024-10-07 13:36:31.604063] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.604141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.604167] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.327 [2024-10-07 13:36:31.604183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.604209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.604230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.604252] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.604267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.604280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.604298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.604318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.604332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.604358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 [2024-10-07 13:36:31.604375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.327 [2024-10-07 13:36:31.618973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.619008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.620386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.620418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.327 [2024-10-07 13:36:31.620436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.620515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.620540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.327 [2024-10-07 13:36:31.620556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.621141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.621173] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.621426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.621452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.621467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.327 [2024-10-07 13:36:31.621485] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.621500] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.621514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.621726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 [2024-10-07 13:36:31.621750] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 8441.10 IOPS, 32.97 MiB/s [2024-10-07T11:36:38.039Z] [2024-10-07 13:36:31.629089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.629153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.629251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.629279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.327 [2024-10-07 13:36:31.629296] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.632016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.632049] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.327 [2024-10-07 13:36:31.632067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.632092] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.634981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.635025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.635039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.635052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.636156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 [2024-10-07 13:36:31.636198] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.636212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.636226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.636835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.327 [2024-10-07 13:36:31.639410] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.639457] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.639590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.639618] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.327 [2024-10-07 13:36:31.639634] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.639787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.639813] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.327 [2024-10-07 13:36:31.639829] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.639848] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.639874] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.639892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.639906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.639920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.327 [2024-10-07 13:36:31.639945] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 [2024-10-07 13:36:31.639962] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.639975] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.639988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.640012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 [2024-10-07 13:36:31.651786] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.651820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.651968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.651998] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.327 [2024-10-07 13:36:31.652016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.652126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.652152] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.327 [2024-10-07 13:36:31.652168] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.652194] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.652216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.652237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.652253] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.652266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.652283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.652297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.652310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.652351] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 [2024-10-07 13:36:31.652367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.327 [2024-10-07 13:36:31.662978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.663012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.664941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.664974] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.327 [2024-10-07 13:36:31.664993] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.665132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.665158] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.327 [2024-10-07 13:36:31.665174] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.667379] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.667412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.668360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.668386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.668399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.327 [2024-10-07 13:36:31.668417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.668437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.668451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.668730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 [2024-10-07 13:36:31.668754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 [2024-10-07 13:36:31.673256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.673302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.673484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.673512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.327 [2024-10-07 13:36:31.673530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.673636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.673662] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.327 [2024-10-07 13:36:31.673688] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.673714] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.673736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.673756] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.673771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.673785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.673802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.327 [2024-10-07 13:36:31.673816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.327 [2024-10-07 13:36:31.673828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.327 [2024-10-07 13:36:31.673854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.327 [2024-10-07 13:36:31.673870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.327 [2024-10-07 13:36:31.683429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.683462] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.327 [2024-10-07 13:36:31.683605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.683634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.327 [2024-10-07 13:36:31.683650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.683737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.327 [2024-10-07 13:36:31.683764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.327 [2024-10-07 13:36:31.683780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.327 [2024-10-07 13:36:31.683965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.327 [2024-10-07 13:36:31.684015] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.684080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.684101] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.684115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.328 [2024-10-07 13:36:31.684133] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.684147] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.684161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.684186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.328 [2024-10-07 13:36:31.684203] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.328 [2024-10-07 13:36:31.695844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.695878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.696195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.696228] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.328 [2024-10-07 13:36:31.696246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.696392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.696418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.328 [2024-10-07 13:36:31.696435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.696909] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.696942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.697176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.697202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.697217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.697235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.697250] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.697263] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.697329] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.328 [2024-10-07 13:36:31.697349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.328 [2024-10-07 13:36:31.707305] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.707338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.707582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.707616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.328 [2024-10-07 13:36:31.707635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.707725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.707751] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.328 [2024-10-07 13:36:31.707768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.709824] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.709856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.710569] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.710595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.710609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.328 [2024-10-07 13:36:31.710627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.710641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.710654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.711157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.328 [2024-10-07 13:36:31.711182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.328 [2024-10-07 13:36:31.717420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.717465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.717644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.717679] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.328 [2024-10-07 13:36:31.717698] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.717814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.717841] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.328 [2024-10-07 13:36:31.717856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.717875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.720092] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.720122] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.720136] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.720149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.720342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.328 [2024-10-07 13:36:31.720366] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.720385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.720399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.720511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.328 [2024-10-07 13:36:31.727840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.727873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.728037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.728065] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.328 [2024-10-07 13:36:31.728082] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.728164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.728190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.328 [2024-10-07 13:36:31.728206] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.728403] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.728446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.728525] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.728547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.728562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.328 [2024-10-07 13:36:31.728579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.728594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.728607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.728632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.328 [2024-10-07 13:36:31.728649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.328 [2024-10-07 13:36:31.740779] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.740813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.741114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.741146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.328 [2024-10-07 13:36:31.741164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.741382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.741408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.328 [2024-10-07 13:36:31.741424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.741628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.741674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.741727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.741748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.741762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.741780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.741795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.741807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.742004] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.328 [2024-10-07 13:36:31.742027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.328 [2024-10-07 13:36:31.754192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.754225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.754614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.754646] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.328 [2024-10-07 13:36:31.754671] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.754792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.754818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.328 [2024-10-07 13:36:31.754834] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.755556] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.755586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.755856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.755879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.755893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.328 [2024-10-07 13:36:31.755912] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.755927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.755941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.756144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.328 [2024-10-07 13:36:31.756167] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.328 [2024-10-07 13:36:31.764339] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.764387] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.764516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.764561] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.328 [2024-10-07 13:36:31.764584] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.764713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.764742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.328 [2024-10-07 13:36:31.764759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.764779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.767530] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.767560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.767575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.767588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.768684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.328 [2024-10-07 13:36:31.768711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.768726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.768740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.769104] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.328 [2024-10-07 13:36:31.774424] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.774599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.774628] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.328 [2024-10-07 13:36:31.774644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.774690] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.774740] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.774773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.774790] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.774803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.774827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.328 [2024-10-07 13:36:31.774991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.775018] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.328 [2024-10-07 13:36:31.775034] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.775059] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.775083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.775099] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.775119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.775252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.328 [2024-10-07 13:36:31.784632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.784785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.784814] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.328 [2024-10-07 13:36:31.784831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.784856] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.784908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.784930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.784944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.784972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.784992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.328 [2024-10-07 13:36:31.785202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.785229] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.328 [2024-10-07 13:36:31.785246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.785271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.785295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.785310] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.785324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.785348] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.328 [2024-10-07 13:36:31.795784] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.795818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.328 [2024-10-07 13:36:31.796128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.796159] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.328 [2024-10-07 13:36:31.796177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.796313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.328 [2024-10-07 13:36:31.796340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.328 [2024-10-07 13:36:31.796357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.328 [2024-10-07 13:36:31.796466] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.796492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.328 [2024-10-07 13:36:31.797528] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.797554] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.797568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.328 [2024-10-07 13:36:31.797585] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.328 [2024-10-07 13:36:31.797599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.328 [2024-10-07 13:36:31.797611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.328 [2024-10-07 13:36:31.799502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.329 [2024-10-07 13:36:31.799529] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.329 [2024-10-07 13:36:31.807688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.807721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.807961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.807990] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.329 [2024-10-07 13:36:31.808007] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.808087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.808113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.329 [2024-10-07 13:36:31.808129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.808266] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.808295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.808410] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.808432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.808447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.329 [2024-10-07 13:36:31.808464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.808479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.808492] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.329 [2024-10-07 13:36:31.808599] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.329 [2024-10-07 13:36:31.808634] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.329 [2024-10-07 13:36:31.817802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.817851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.817983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.818011] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.329 [2024-10-07 13:36:31.818029] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.818122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.818149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.329 [2024-10-07 13:36:31.818165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.818184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.818210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.818229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.818242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.818255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.329 [2024-10-07 13:36:31.818280] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.329 [2024-10-07 13:36:31.818298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.818311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.818324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.329 [2024-10-07 13:36:31.818346] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.329 [2024-10-07 13:36:31.829167] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.829202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.829391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.829420] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.329 [2024-10-07 13:36:31.829437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.829545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.829571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.329 [2024-10-07 13:36:31.829587] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.829783] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.829827] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.829891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.829912] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.829926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.329 [2024-10-07 13:36:31.829943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.829959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.829972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.329 [2024-10-07 13:36:31.830153] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.329 [2024-10-07 13:36:31.830182] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.329 [2024-10-07 13:36:31.841704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.841738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.841960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.841989] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.329 [2024-10-07 13:36:31.842006] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.842088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.842113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.329 [2024-10-07 13:36:31.842129] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.843595] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.843628] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.844256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.844281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.844295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.329 [2024-10-07 13:36:31.844311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.844324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.844336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.329 [2024-10-07 13:36:31.844603] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.329 [2024-10-07 13:36:31.844627] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.329 [2024-10-07 13:36:31.851821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.851868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.852012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.852042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.329 [2024-10-07 13:36:31.852059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.852173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.852200] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.329 [2024-10-07 13:36:31.852217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.852235] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.852261] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.852279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.852298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.852311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.329 [2024-10-07 13:36:31.852337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.329 [2024-10-07 13:36:31.852354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.852367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.852380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.329 [2024-10-07 13:36:31.852402] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.329 [2024-10-07 13:36:31.861909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.862022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.862050] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.329 [2024-10-07 13:36:31.862069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.862267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.862362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.862397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.862414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.862429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.329 [2024-10-07 13:36:31.862453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.329 [2024-10-07 13:36:31.862568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.862595] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.329 [2024-10-07 13:36:31.862612] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.862637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.862661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.862688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.862702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.329 [2024-10-07 13:36:31.862726] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.329 [2024-10-07 13:36:31.875965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.876000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.876342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.876373] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.329 [2024-10-07 13:36:31.876390] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.876503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.876534] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.329 [2024-10-07 13:36:31.876551] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.876768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.876799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.876848] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.876867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.876882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.329 [2024-10-07 13:36:31.876899] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.876913] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.876926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.329 [2024-10-07 13:36:31.877108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.329 [2024-10-07 13:36:31.877134] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.329 [2024-10-07 13:36:31.890841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.890874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.329 [2024-10-07 13:36:31.891009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.891039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.329 [2024-10-07 13:36:31.891056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.891139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.329 [2024-10-07 13:36:31.891166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.329 [2024-10-07 13:36:31.891183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.329 [2024-10-07 13:36:31.891208] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.891228] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.329 [2024-10-07 13:36:31.891250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.891266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.891280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.329 [2024-10-07 13:36:31.891298] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.329 [2024-10-07 13:36:31.891312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.329 [2024-10-07 13:36:31.891325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.329 [2024-10-07 13:36:31.891350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.329 [2024-10-07 13:36:31.891366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.329 [2024-10-07 13:36:31.901479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.329 [2024-10-07 13:36:31.901512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.329 [2024-10-07 13:36:31.904362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.329 [2024-10-07 13:36:31.904394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.329 [2024-10-07 13:36:31.904412] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.329 [2024-10-07 13:36:31.904495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.329 [2024-10-07 13:36:31.904520] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.329 [2024-10-07 13:36:31.904540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.329 [2024-10-07 13:36:31.906104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.329 [2024-10-07 13:36:31.906137] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.329 [2024-10-07 13:36:31.906194] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.329 [2024-10-07 13:36:31.906214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.329 [2024-10-07 13:36:31.906228] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.329 [2024-10-07 13:36:31.906245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.329 [2024-10-07 13:36:31.906260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.329 [2024-10-07 13:36:31.906273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.329 [2024-10-07 13:36:31.906298] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.329 [2024-10-07 13:36:31.906315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.329 [2024-10-07 13:36:31.913573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.329 [2024-10-07 13:36:31.913606] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.329 [2024-10-07 13:36:31.913812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.329 [2024-10-07 13:36:31.913844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.329 [2024-10-07 13:36:31.913861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.329 [2024-10-07 13:36:31.913999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.329 [2024-10-07 13:36:31.914025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.329 [2024-10-07 13:36:31.914042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.329 [2024-10-07 13:36:31.914149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.329 [2024-10-07 13:36:31.914176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.329 [2024-10-07 13:36:31.914294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.329 [2024-10-07 13:36:31.914315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.914334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.914352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.914366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.914378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.914481] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.914502] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.924153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:31.924187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:31.924291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:31.924319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.330 [2024-10-07 13:36:31.924336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:31.924421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:31.924447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.330 [2024-10-07 13:36:31.924463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:31.924488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:31.924510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:31.924531] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.924546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.924560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.924577] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.924592] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.924604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.924629] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.924646] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.935273] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:31.935308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:31.935472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:31.935503] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.330 [2024-10-07 13:36:31.935521] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:31.935626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:31.935654] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.330 [2024-10-07 13:36:31.935687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:31.935873] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:31.935917] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:31.935981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.936003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.936017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.936034] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.936049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.936062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.936244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.936269] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.948500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:31.948534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:31.948720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:31.948752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.330 [2024-10-07 13:36:31.948769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:31.948880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:31.948908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.330 [2024-10-07 13:36:31.948925] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:31.949769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:31.949799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:31.950217] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.950242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.950256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.950274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.950288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.950301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.950519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.950544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.959045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:31.959084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:31.959301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:31.959332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.330 [2024-10-07 13:36:31.959349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:31.959436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:31.959461] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.330 [2024-10-07 13:36:31.959478] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:31.959586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:31.959614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:31.959728] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.959751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.959765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.959782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.959796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.959809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.963058] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.963085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.969500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:31.969533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:31.969646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:31.969684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.330 [2024-10-07 13:36:31.969703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:31.969820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:31.969846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.330 [2024-10-07 13:36:31.969861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:31.969886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:31.969907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:31.969928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.969943] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.969956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.969979] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.970000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.970013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.970038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.970054] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.979821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:31.979853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:31.980145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:31.980176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.330 [2024-10-07 13:36:31.980194] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:31.980303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:31.980329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.330 [2024-10-07 13:36:31.980346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:31.980551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:31.980579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:31.980627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.980647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.980661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.980688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.980703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.980717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.980900] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.980924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.994279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:31.994313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:31.994694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:31.994734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.330 [2024-10-07 13:36:31.994751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:31.994856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:31.994883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.330 [2024-10-07 13:36:31.994904] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:31.995329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:31.995374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:31.995605] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.995630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.995645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.995664] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:31.995691] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:31.995704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:31.995763] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:31.995783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:32.004626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:32.004684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:32.004946] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:32.004978] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.330 [2024-10-07 13:36:32.004996] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:32.005075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:32.005102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.330 [2024-10-07 13:36:32.005118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:32.006943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:32.006975] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:32.008988] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:32.009014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:32.009029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:32.009047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:32.009061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:32.009075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:32.009381] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:32.009407] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:32.014901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:32.014932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:32.015087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:32.015116] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.330 [2024-10-07 13:36:32.015133] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:32.015233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:32.015258] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.330 [2024-10-07 13:36:32.015274] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:32.015299] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:32.015321] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:32.015343] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:32.015357] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:32.015371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:32.015388] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:32.015403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:32.015416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:32.015441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:32.015472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.330 [2024-10-07 13:36:32.025028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:32.025061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.330 [2024-10-07 13:36:32.025169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:32.025197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.330 [2024-10-07 13:36:32.025214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:32.025324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.330 [2024-10-07 13:36:32.025350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.330 [2024-10-07 13:36:32.025366] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.330 [2024-10-07 13:36:32.025391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:32.025412] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.330 [2024-10-07 13:36:32.025433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.330 [2024-10-07 13:36:32.025448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.330 [2024-10-07 13:36:32.025461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.330 [2024-10-07 13:36:32.025478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.331 [2024-10-07 13:36:32.025498] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.331 [2024-10-07 13:36:32.025512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.331 [2024-10-07 13:36:32.025536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.331 [2024-10-07 13:36:32.025553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.331 [2024-10-07 13:36:32.039001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.039036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.039599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.039632] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.331 [2024-10-07 13:36:32.039649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.039747] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.039773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.331 [2024-10-07 13:36:32.039790] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.040009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.040038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.040238] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.040261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.040275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.331 [2024-10-07 13:36:32.040294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.040308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.040322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.040553] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.040577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.049311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.049344] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.049683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.049716] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.331 [2024-10-07 13:36:32.049734] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.049841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.049867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.331 [2024-10-07 13:36:32.049883] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.054649] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.054691] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.055487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.055512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.055525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.055557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.055572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.055585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.056079] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.056103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.331 [2024-10-07 13:36:32.059747] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.059778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.060092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.060124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.331 [2024-10-07 13:36:32.060142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.060252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.060278] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.331 [2024-10-07 13:36:32.060294] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.060343] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.060368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.060390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.060406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.060419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.331 [2024-10-07 13:36:32.060436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.060451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.060464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.060488] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.060519] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.070443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.070477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.070645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.070687] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.331 [2024-10-07 13:36:32.070707] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.070791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.070816] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.331 [2024-10-07 13:36:32.070832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.071032] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.071061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.071123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.071142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.071170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.071189] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.071204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.071217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.071400] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.071423] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.331 [2024-10-07 13:36:32.084381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.084414] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.084973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.085005] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.331 [2024-10-07 13:36:32.085023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.085103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.085129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.331 [2024-10-07 13:36:32.085145] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.085361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.085390] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.085438] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.085459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.085473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.331 [2024-10-07 13:36:32.085490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.085505] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.085528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.085776] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.085800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.095537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.095570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.095800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.095830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.331 [2024-10-07 13:36:32.095847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.095954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.095981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.331 [2024-10-07 13:36:32.095997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.096104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.096132] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.098552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.098579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.098594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.098611] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.098626] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.098640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.099131] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.099170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.331 [2024-10-07 13:36:32.105674] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.105721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.105850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.105879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.331 [2024-10-07 13:36:32.105895] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.106042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.106078] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.331 [2024-10-07 13:36:32.106094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.106112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.106156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.106175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.106188] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.106201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.331 [2024-10-07 13:36:32.106226] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.106243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.106256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.106269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.106292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.115760] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.115936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.115965] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.331 [2024-10-07 13:36:32.115982] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.116021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.116054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.116083] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.116100] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.116114] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.116137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.116235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.116261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.331 [2024-10-07 13:36:32.116276] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.116301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.116325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.116339] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.116352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.116375] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.331 [2024-10-07 13:36:32.126651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.126691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.126843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.126871] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.331 [2024-10-07 13:36:32.126893] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.126982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.127007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.331 [2024-10-07 13:36:32.127023] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.127171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.127200] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.127345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.127367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.127381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.331 [2024-10-07 13:36:32.127399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.127414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.127427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.127547] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.127569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.137270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.137301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.137470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.137499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.331 [2024-10-07 13:36:32.137517] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.137657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.137691] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.331 [2024-10-07 13:36:32.137709] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.331 [2024-10-07 13:36:32.139797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.139829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.331 [2024-10-07 13:36:32.141336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.141363] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.141378] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.141396] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.331 [2024-10-07 13:36:32.141410] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.331 [2024-10-07 13:36:32.141429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.331 [2024-10-07 13:36:32.141651] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.331 [2024-10-07 13:36:32.141685] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.331 [2024-10-07 13:36:32.147395] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.147441] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.331 [2024-10-07 13:36:32.149523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.331 [2024-10-07 13:36:32.149555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.332 [2024-10-07 13:36:32.149573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.149690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.149717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.332 [2024-10-07 13:36:32.149733] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.154106] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.154141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.154495] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.154537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.154553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.332 [2024-10-07 13:36:32.154572] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.154588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.154601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.332 [2024-10-07 13:36:32.154751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.332 [2024-10-07 13:36:32.154774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.332 [2024-10-07 13:36:32.157520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.157564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.157786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.157815] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.332 [2024-10-07 13:36:32.157833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.157955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.157981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.332 [2024-10-07 13:36:32.157998] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.158017] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.158043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.158067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.158081] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.158094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.332 [2024-10-07 13:36:32.158119] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.332 [2024-10-07 13:36:32.158137] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.158165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.158179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.332 [2024-10-07 13:36:32.158233] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.332 [2024-10-07 13:36:32.169108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.169141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.170284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.170316] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.332 [2024-10-07 13:36:32.170333] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.170452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.170477] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.332 [2024-10-07 13:36:32.170493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.171081] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.171127] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.171367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.171393] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.171408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.332 [2024-10-07 13:36:32.171426] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.171441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.171454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.332 [2024-10-07 13:36:32.171658] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.332 [2024-10-07 13:36:32.171691] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.332 [2024-10-07 13:36:32.179594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.179627] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.179846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.179876] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.332 [2024-10-07 13:36:32.179899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.179991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.180019] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.332 [2024-10-07 13:36:32.180035] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.180144] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.180172] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.180305] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.180327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.180340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.332 [2024-10-07 13:36:32.180357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.180372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.180399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.332 [2024-10-07 13:36:32.180513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.332 [2024-10-07 13:36:32.180534] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.332 [2024-10-07 13:36:32.190060] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.190094] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.190268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.190297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.332 [2024-10-07 13:36:32.190315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.190423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.190450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.332 [2024-10-07 13:36:32.190467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.190494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.190515] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.190536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.190551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.190564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.332 [2024-10-07 13:36:32.190581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.190596] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.190609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.332 [2024-10-07 13:36:32.190640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.332 [2024-10-07 13:36:32.190657] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.332 [2024-10-07 13:36:32.200174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.200222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.200380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.200408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.332 [2024-10-07 13:36:32.200425] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.201109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.201140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.332 [2024-10-07 13:36:32.201156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.201176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.202334] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.202362] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.202377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.202390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.332 [2024-10-07 13:36:32.202806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.332 [2024-10-07 13:36:32.202832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.202846] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.202860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.332 [2024-10-07 13:36:32.202939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.332 [2024-10-07 13:36:32.210261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.210380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.210410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.332 [2024-10-07 13:36:32.210428] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.210467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.210498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.210527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.210543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.210557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.332 [2024-10-07 13:36:32.210581] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.332 [2024-10-07 13:36:32.210725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.210758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.332 [2024-10-07 13:36:32.210775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.210802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.210826] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.210840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.210854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.332 [2024-10-07 13:36:32.210878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.332 [2024-10-07 13:36:32.222528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.222562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.222782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.222812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.332 [2024-10-07 13:36:32.222831] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.222912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.222938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.332 [2024-10-07 13:36:32.222953] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.223021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.223065] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.223117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.223138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.223151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.332 [2024-10-07 13:36:32.223169] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.223184] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.223198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.332 [2024-10-07 13:36:32.223223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.332 [2024-10-07 13:36:32.223240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.332 [2024-10-07 13:36:32.232656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.232715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.232820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.232849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.332 [2024-10-07 13:36:32.232866] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.235536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.235568] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.332 [2024-10-07 13:36:32.235586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.235605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.236592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.236620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.236634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.236661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.332 [2024-10-07 13:36:32.236913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.332 [2024-10-07 13:36:32.236937] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.236951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.236965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.332 [2024-10-07 13:36:32.237725] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.332 [2024-10-07 13:36:32.242878] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.242911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.332 [2024-10-07 13:36:32.243014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.243042] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.332 [2024-10-07 13:36:32.243059] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.243136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.332 [2024-10-07 13:36:32.243162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.332 [2024-10-07 13:36:32.243177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.332 [2024-10-07 13:36:32.243202] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.243224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.332 [2024-10-07 13:36:32.243245] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.332 [2024-10-07 13:36:32.243260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.332 [2024-10-07 13:36:32.243274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.333 [2024-10-07 13:36:32.243291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.333 [2024-10-07 13:36:32.243306] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.333 [2024-10-07 13:36:32.243319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.333 [2024-10-07 13:36:32.243344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.333 [2024-10-07 13:36:32.243366] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.333 [2024-10-07 13:36:32.253005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.333 [2024-10-07 13:36:32.253037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.333 [2024-10-07 13:36:32.253272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-10-07 13:36:32.253301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.333 [2024-10-07 13:36:32.253318] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.333 [2024-10-07 13:36:32.253399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-10-07 13:36:32.253425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.333 [2024-10-07 13:36:32.253442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.333 [2024-10-07 13:36:32.253467] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.333 [2024-10-07 13:36:32.253488] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.333 [2024-10-07 13:36:32.253510] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.333 [2024-10-07 13:36:32.253525] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.333 [2024-10-07 13:36:32.253538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.333 [2024-10-07 13:36:32.253556] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.333 [2024-10-07 13:36:32.253571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.333 [2024-10-07 13:36:32.253584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.333 [2024-10-07 13:36:32.253609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.333 [2024-10-07 13:36:32.253626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.333 [2024-10-07 13:36:32.265140] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.333 [2024-10-07 13:36:32.265174] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.333 [2024-10-07 13:36:32.265520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-10-07 13:36:32.265552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.333 [2024-10-07 13:36:32.265570] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.333 [2024-10-07 13:36:32.265681] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-10-07 13:36:32.265708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.333 [2024-10-07 13:36:32.265726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.333 [2024-10-07 13:36:32.265930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.333 [2024-10-07 13:36:32.265959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.333 [2024-10-07 13:36:32.266007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.333 [2024-10-07 13:36:32.266026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.333 [2024-10-07 13:36:32.266047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.333 [2024-10-07 13:36:32.266065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.333 [2024-10-07 13:36:32.266079] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.333 [2024-10-07 13:36:32.266093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.333 [2024-10-07 13:36:32.266289] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.333 [2024-10-07 13:36:32.266313] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.333 [2024-10-07 13:36:32.280712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.333 [2024-10-07 13:36:32.280746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.333 [2024-10-07 13:36:32.281104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-10-07 13:36:32.281137] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.333 [2024-10-07 13:36:32.281155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.333 [2024-10-07 13:36:32.281263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-10-07 13:36:32.281289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.333 [2024-10-07 13:36:32.281305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.333 [2024-10-07 13:36:32.281510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.333 [2024-10-07 13:36:32.281539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.333 [2024-10-07 13:36:32.281588] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.333 [2024-10-07 13:36:32.281607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.333 [2024-10-07 13:36:32.281621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.333 [2024-10-07 13:36:32.281638] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.333 [2024-10-07 13:36:32.281653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.333 [2024-10-07 13:36:32.281674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.333 [2024-10-07 13:36:32.281860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.333 [2024-10-07 13:36:32.281883] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.333 [2024-10-07 13:36:32.295317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.333 [2024-10-07 13:36:32.295351] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.333 [2024-10-07 13:36:32.295908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-10-07 13:36:32.295940] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.333 [2024-10-07 13:36:32.295958] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.333 [2024-10-07 13:36:32.296041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.333 [2024-10-07 13:36:32.296071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.333 [2024-10-07 13:36:32.296088] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.333 [2024-10-07 13:36:32.296371] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.333 [2024-10-07 13:36:32.296400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.333 [2024-10-07 13:36:32.296463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.333 [2024-10-07 13:36:32.296497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.333 [2024-10-07 13:36:32.296513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.333 [2024-10-07 13:36:32.296530] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.333 [2024-10-07 13:36:32.296545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.333 [2024-10-07 13:36:32.296558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.333 [2024-10-07 13:36:32.296769] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.333 [2024-10-07 13:36:32.296794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.333 [2024-10-07 13:36:32.309841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.333 [2024-10-07 13:36:32.309876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.333 [2024-10-07 13:36:32.310454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.333 [2024-10-07 13:36:32.310486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.333 [2024-10-07 13:36:32.310504] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.333 [2024-10-07 13:36:32.310645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.333 [2024-10-07 13:36:32.310678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.333 [2024-10-07 13:36:32.310697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.333 [2024-10-07 13:36:32.311072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.333 [2024-10-07 13:36:32.311119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.333 [2024-10-07 13:36:32.311348] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.333 [2024-10-07 13:36:32.311374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.333 [2024-10-07 13:36:32.311389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.333 [2024-10-07 13:36:32.311407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.333 [2024-10-07 13:36:32.311422] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.333 [2024-10-07 13:36:32.311435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.333 [2024-10-07 13:36:32.311500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.333 [2024-10-07 13:36:32.311521] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.333 [2024-10-07 13:36:32.321589] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.333 [2024-10-07 13:36:32.321622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.333 [2024-10-07 13:36:32.321889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.333 [2024-10-07 13:36:32.321919] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.333 [2024-10-07 13:36:32.321936] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.333 [2024-10-07 13:36:32.322018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.333 [2024-10-07 13:36:32.322044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.333 [2024-10-07 13:36:32.322060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.333 [2024-10-07 13:36:32.322201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.333 [2024-10-07 13:36:32.322230] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.333 [2024-10-07 13:36:32.322379] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.333 [2024-10-07 13:36:32.322402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.333 [2024-10-07 13:36:32.322416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.333 [2024-10-07 13:36:32.322434] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.333 [2024-10-07 13:36:32.322448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.333 [2024-10-07 13:36:32.322461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.333 [2024-10-07 13:36:32.326207] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.333 [2024-10-07 13:36:32.326236] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.333 [2024-10-07 13:36:32.331703] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.333 [2024-10-07 13:36:32.331750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.333 [2024-10-07 13:36:32.331916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.333 [2024-10-07 13:36:32.331944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.333 [2024-10-07 13:36:32.331961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.333 [2024-10-07 13:36:32.332084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.333 [2024-10-07 13:36:32.332110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.333 [2024-10-07 13:36:32.332126] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.333 [2024-10-07 13:36:32.332145] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.333 [2024-10-07 13:36:32.333254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.333 [2024-10-07 13:36:32.333282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.333 [2024-10-07 13:36:32.333295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.333 [2024-10-07 13:36:32.333313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.333 [2024-10-07 13:36:32.333514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.333 [2024-10-07 13:36:32.333539] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.333 [2024-10-07 13:36:32.333552] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.333 [2024-10-07 13:36:32.333565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.333 [2024-10-07 13:36:32.333680] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.333 [2024-10-07 13:36:32.341896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.333 [2024-10-07 13:36:32.341929] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.333 [2024-10-07 13:36:32.342068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.333 [2024-10-07 13:36:32.342096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.333 [2024-10-07 13:36:32.342113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.333 [2024-10-07 13:36:32.342249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.333 [2024-10-07 13:36:32.342275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.333 [2024-10-07 13:36:32.342291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.333 [2024-10-07 13:36:32.342317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.333 [2024-10-07 13:36:32.342338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.333 [2024-10-07 13:36:32.342359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.333 [2024-10-07 13:36:32.342375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.333 [2024-10-07 13:36:32.342389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.333 [2024-10-07 13:36:32.342406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.333 [2024-10-07 13:36:32.342420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.333 [2024-10-07 13:36:32.342432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.333 [2024-10-07 13:36:32.342456] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.333 [2024-10-07 13:36:32.342473] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.333 [2024-10-07 13:36:32.354764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.333 [2024-10-07 13:36:32.354813] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.333 [2024-10-07 13:36:32.355940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.333 [2024-10-07 13:36:32.355972] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.333 [2024-10-07 13:36:32.355990] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.333 [2024-10-07 13:36:32.356069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.333 [2024-10-07 13:36:32.356094] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.333 [2024-10-07 13:36:32.356115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.333 [2024-10-07 13:36:32.356611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.333 [2024-10-07 13:36:32.356642] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.333 [2024-10-07 13:36:32.356894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.333 [2024-10-07 13:36:32.356918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.333 [2024-10-07 13:36:32.356933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.333 [2024-10-07 13:36:32.356951] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.333 [2024-10-07 13:36:32.356966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.333 [2024-10-07 13:36:32.356978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.333 [2024-10-07 13:36:32.357030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.333 [2024-10-07 13:36:32.357052] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.333 [2024-10-07 13:36:32.364911] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.333 [2024-10-07 13:36:32.366940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.333 [2024-10-07 13:36:32.367053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.333 [2024-10-07 13:36:32.367082] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.333 [2024-10-07 13:36:32.367099] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.333 [2024-10-07 13:36:32.368079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.333 [2024-10-07 13:36:32.368124] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.333 [2024-10-07 13:36:32.368141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.333 [2024-10-07 13:36:32.368161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.333 [2024-10-07 13:36:32.368519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.333 [2024-10-07 13:36:32.368546] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.333 [2024-10-07 13:36:32.368575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.333 [2024-10-07 13:36:32.368588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.333 [2024-10-07 13:36:32.369826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.333 [2024-10-07 13:36:32.369852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.333 [2024-10-07 13:36:32.369865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.333 [2024-10-07 13:36:32.369879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.333 [2024-10-07 13:36:32.370012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.333 [2024-10-07 13:36:32.375011] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.333 [2024-10-07 13:36:32.375212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.333 [2024-10-07 13:36:32.375242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.334 [2024-10-07 13:36:32.375260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.375285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.334 [2024-10-07 13:36:32.375309] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.334 [2024-10-07 13:36:32.375324] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.334 [2024-10-07 13:36:32.375339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.334 [2024-10-07 13:36:32.375364] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.334 [2024-10-07 13:36:32.377404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.377555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.377584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.334 [2024-10-07 13:36:32.377601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.377626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.334 [2024-10-07 13:36:32.377650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.334 [2024-10-07 13:36:32.377674] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.334 [2024-10-07 13:36:32.377689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.334 [2024-10-07 13:36:32.377722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.334 [2024-10-07 13:36:32.385104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.385278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.385308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.334 [2024-10-07 13:36:32.385327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.385353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.334 [2024-10-07 13:36:32.385377] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.334 [2024-10-07 13:36:32.385392] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.334 [2024-10-07 13:36:32.385406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.334 [2024-10-07 13:36:32.385431] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.334 [2024-10-07 13:36:32.387485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.387631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.387660] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.334 [2024-10-07 13:36:32.387687] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.387719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.334 [2024-10-07 13:36:32.387743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.334 [2024-10-07 13:36:32.387758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.334 [2024-10-07 13:36:32.387772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.334 [2024-10-07 13:36:32.387796] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.334 [2024-10-07 13:36:32.398707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.398788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.398927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.398957] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.334 [2024-10-07 13:36:32.398975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.399096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.399123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.334 [2024-10-07 13:36:32.399139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.399158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.334 [2024-10-07 13:36:32.399184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.334 [2024-10-07 13:36:32.399202] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.334 [2024-10-07 13:36:32.399216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.334 [2024-10-07 13:36:32.399229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.334 [2024-10-07 13:36:32.399255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.334 [2024-10-07 13:36:32.399272] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.334 [2024-10-07 13:36:32.399285] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.334 [2024-10-07 13:36:32.399298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.334 [2024-10-07 13:36:32.399335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.334 [2024-10-07 13:36:32.412620] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.412653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.412774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.412804] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.334 [2024-10-07 13:36:32.412822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.412933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.412960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.334 [2024-10-07 13:36:32.412976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.413007] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.334 [2024-10-07 13:36:32.413030] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.334 [2024-10-07 13:36:32.413051] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.334 [2024-10-07 13:36:32.413066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.334 [2024-10-07 13:36:32.413079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.334 [2024-10-07 13:36:32.413096] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.334 [2024-10-07 13:36:32.413111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.334 [2024-10-07 13:36:32.413123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.334 [2024-10-07 13:36:32.413148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.334 [2024-10-07 13:36:32.413164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.334 [2024-10-07 13:36:32.427827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.427861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.428043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.428073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.334 [2024-10-07 13:36:32.428091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.428176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.428204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.334 [2024-10-07 13:36:32.428220] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.429300] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.334 [2024-10-07 13:36:32.429345] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.334 [2024-10-07 13:36:32.430002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.334 [2024-10-07 13:36:32.430041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.334 [2024-10-07 13:36:32.430055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.334 [2024-10-07 13:36:32.430072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.334 [2024-10-07 13:36:32.430086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.334 [2024-10-07 13:36:32.430099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.334 [2024-10-07 13:36:32.430394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.334 [2024-10-07 13:36:32.430420] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.334 [2024-10-07 13:36:32.439041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.439075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.439307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.439338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.334 [2024-10-07 13:36:32.439355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.439466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.439493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.334 [2024-10-07 13:36:32.439509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.439627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.334 [2024-10-07 13:36:32.439655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.334 [2024-10-07 13:36:32.442830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.334 [2024-10-07 13:36:32.442856] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.334 [2024-10-07 13:36:32.442871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.334 [2024-10-07 13:36:32.442888] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.334 [2024-10-07 13:36:32.442902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.334 [2024-10-07 13:36:32.442915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.334 [2024-10-07 13:36:32.443880] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.334 [2024-10-07 13:36:32.443905] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.334 [2024-10-07 13:36:32.449310] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.449340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.449484] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.449513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.334 [2024-10-07 13:36:32.449531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.449643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.449680] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.334 [2024-10-07 13:36:32.449699] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.449725] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.334 [2024-10-07 13:36:32.449746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.334 [2024-10-07 13:36:32.449768] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.334 [2024-10-07 13:36:32.449782] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.334 [2024-10-07 13:36:32.449796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.334 [2024-10-07 13:36:32.449812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.334 [2024-10-07 13:36:32.449832] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.334 [2024-10-07 13:36:32.449845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.334 [2024-10-07 13:36:32.449871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.334 [2024-10-07 13:36:32.449888] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.334 [2024-10-07 13:36:32.459476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.459509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.334 [2024-10-07 13:36:32.459858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.459890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.334 [2024-10-07 13:36:32.459908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.460021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.334 [2024-10-07 13:36:32.460048] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.334 [2024-10-07 13:36:32.460064] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.334 [2024-10-07 13:36:32.460269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.334 [2024-10-07 13:36:32.460298] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.334 [2024-10-07 13:36:32.460347] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.334 [2024-10-07 13:36:32.460367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.334 [2024-10-07 13:36:32.460381] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.334 [2024-10-07 13:36:32.460398] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.334 [2024-10-07 13:36:32.460413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.334 [2024-10-07 13:36:32.460426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.334 [2024-10-07 13:36:32.460608] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.334 [2024-10-07 13:36:32.460632] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.334 [2024-10-07 13:36:32.473501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.334 [2024-10-07 13:36:32.473535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.334 [2024-10-07 13:36:32.474055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-10-07 13:36:32.474087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.334 [2024-10-07 13:36:32.474104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.334 [2024-10-07 13:36:32.474224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-10-07 13:36:32.474249] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.334 [2024-10-07 13:36:32.474266] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.334 [2024-10-07 13:36:32.474656] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.334 [2024-10-07 13:36:32.474703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.334 [2024-10-07 13:36:32.474920] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.334 [2024-10-07 13:36:32.474944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.334 [2024-10-07 13:36:32.474958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.334 [2024-10-07 13:36:32.474976] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.334 [2024-10-07 13:36:32.474990] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.334 [2024-10-07 13:36:32.475004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.334 [2024-10-07 13:36:32.475068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.334 [2024-10-07 13:36:32.475088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.334 [2024-10-07 13:36:32.484153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.334 [2024-10-07 13:36:32.484187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.334 [2024-10-07 13:36:32.484449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-10-07 13:36:32.484480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.334 [2024-10-07 13:36:32.484498] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.334 [2024-10-07 13:36:32.484577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.334 [2024-10-07 13:36:32.484605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.334 [2024-10-07 13:36:32.484622] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.334 [2024-10-07 13:36:32.484739] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.334 [2024-10-07 13:36:32.484767] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.334 [2024-10-07 13:36:32.484886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.334 [2024-10-07 13:36:32.484909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.334 [2024-10-07 13:36:32.484923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.334 [2024-10-07 13:36:32.484941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.334 [2024-10-07 13:36:32.484955] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.334 [2024-10-07 13:36:32.484968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.334 [2024-10-07 13:36:32.485088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.334 [2024-10-07 13:36:32.485111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.335 [2024-10-07 13:36:32.494265] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.494311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.494474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.494510] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.335 [2024-10-07 13:36:32.494529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.494610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.494637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.335 [2024-10-07 13:36:32.494653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.494680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.494927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.494968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.494983] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.494995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.335 [2024-10-07 13:36:32.495158] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.495184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.495198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.495211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.495317] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.505404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.505438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.505578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.505608] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.335 [2024-10-07 13:36:32.505625] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.505728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.505756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.335 [2024-10-07 13:36:32.505773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.505799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.505820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.505857] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.505877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.505891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.505908] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.505922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.505940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.505982] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.505998] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.335 [2024-10-07 13:36:32.518465] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.518497] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.519156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.519187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.335 [2024-10-07 13:36:32.519205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.519292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.519319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.335 [2024-10-07 13:36:32.519335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.519570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.519600] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.519648] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.519675] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.519692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.335 [2024-10-07 13:36:32.519709] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.519724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.519737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.519777] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.519797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.533301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.533334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.533832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.533863] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.335 [2024-10-07 13:36:32.533881] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.534017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.534044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.335 [2024-10-07 13:36:32.534061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.534278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.534314] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.534364] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.534385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.534399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.534416] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.534432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.534445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.534712] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.534736] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.335 [2024-10-07 13:36:32.548943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.548991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.549351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.549382] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.335 [2024-10-07 13:36:32.549399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.549487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.549512] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.335 [2024-10-07 13:36:32.549528] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.549897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.549927] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.550001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.550022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.550036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.335 [2024-10-07 13:36:32.550054] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.550068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.550080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.550263] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.550301] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.564879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.564913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.565273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.565304] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.335 [2024-10-07 13:36:32.565328] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.565409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.565435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.335 [2024-10-07 13:36:32.565451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.565655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.565694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.565744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.565764] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.565777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.565795] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.565809] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.565823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.566006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.566030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.335 [2024-10-07 13:36:32.579378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.579411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.579839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.579870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.335 [2024-10-07 13:36:32.579888] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.579975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.580000] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.335 [2024-10-07 13:36:32.580016] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.580223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.580252] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.580301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.580321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.580335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.335 [2024-10-07 13:36:32.580353] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.580367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.580380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.580630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.580655] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.593728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.593762] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.594055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.594086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.335 [2024-10-07 13:36:32.594104] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.594213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.594240] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.335 [2024-10-07 13:36:32.594256] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.594756] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.594786] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.595020] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.595044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.595059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.595077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.595091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.595103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.595307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.595331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.335 [2024-10-07 13:36:32.605292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.605325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.609773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.609806] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.335 [2024-10-07 13:36:32.609824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.609914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.609939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.335 [2024-10-07 13:36:32.609955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.610664] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.610706] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.610980] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.611004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.611018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.335 [2024-10-07 13:36:32.611036] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.611052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.611065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.611267] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.611292] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.615404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.615448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.615677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.615706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.335 [2024-10-07 13:36:32.615723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.615818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.615845] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.335 [2024-10-07 13:36:32.615862] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.615881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.615907] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.615926] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.615939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.615952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.615976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.335 [2024-10-07 13:36:32.615994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.335 [2024-10-07 13:36:32.616006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.335 [2024-10-07 13:36:32.616020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.335 [2024-10-07 13:36:32.616043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.335 [2024-10-07 13:36:32.625483] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.335 [2024-10-07 13:36:32.625662] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.335 [2024-10-07 13:36:32.625699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.335 [2024-10-07 13:36:32.625717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.335 [2024-10-07 13:36:32.625754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.335 [2024-10-07 13:36:32.625791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.625821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.625838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.625853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.625877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.336 [2024-10-07 13:36:32.626035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.626063] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.336 [2024-10-07 13:36:32.626080] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.626105] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.336 8461.36 IOPS, 33.05 MiB/s [2024-10-07T11:36:38.048Z] [2024-10-07 13:36:32.627713] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.627733] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.627746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.627770] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.336 [2024-10-07 13:36:32.637647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.637688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.637886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.637916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.336 [2024-10-07 13:36:32.637933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.638044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.638071] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.336 [2024-10-07 13:36:32.638087] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.638195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.638223] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.638341] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.638362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.638389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.336 [2024-10-07 13:36:32.638406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.638420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.638432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.638538] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.336 [2024-10-07 13:36:32.638557] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.336 [2024-10-07 13:36:32.648012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.648046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.648391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.648423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.336 [2024-10-07 13:36:32.648440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.648524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.648550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.336 [2024-10-07 13:36:32.648566] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.648699] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.648729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.648832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.648853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.648867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.648885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.648900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.648928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.649045] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.336 [2024-10-07 13:36:32.649065] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.336 [2024-10-07 13:36:32.658190] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.658223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.658386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.658416] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.336 [2024-10-07 13:36:32.658433] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.658543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.658570] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.336 [2024-10-07 13:36:32.658586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.658922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.658953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.659192] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.659222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.659238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.336 [2024-10-07 13:36:32.659255] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.659269] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.659282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.659334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.336 [2024-10-07 13:36:32.659355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.336 [2024-10-07 13:36:32.672290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.672323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.672636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.672676] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.336 [2024-10-07 13:36:32.672696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.672833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.672861] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.336 [2024-10-07 13:36:32.672877] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.673157] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.673187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.673420] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.673444] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.673458] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.673476] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.673491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.673504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.673719] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.336 [2024-10-07 13:36:32.673744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.336 [2024-10-07 13:36:32.683282] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.683315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.683607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.683637] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.336 [2024-10-07 13:36:32.683655] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.683789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.683817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.336 [2024-10-07 13:36:32.683833] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.683942] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.683970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.684087] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.684110] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.684123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.336 [2024-10-07 13:36:32.684140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.684154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.684166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.687574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.336 [2024-10-07 13:36:32.687601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.336 [2024-10-07 13:36:32.693396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.693442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.693608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.693636] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.336 [2024-10-07 13:36:32.693653] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.693776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.693803] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.336 [2024-10-07 13:36:32.693819] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.693837] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.693863] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.693881] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.693895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.693908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.693932] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.336 [2024-10-07 13:36:32.693949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.693962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.693975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.694018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.336 [2024-10-07 13:36:32.703564] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.703598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.703716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.703745] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.336 [2024-10-07 13:36:32.703762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.703870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.703897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.336 [2024-10-07 13:36:32.703914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.703939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.703960] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.703981] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.703996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.704010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.336 [2024-10-07 13:36:32.704026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.704041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.704053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.704077] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.336 [2024-10-07 13:36:32.704093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.336 [2024-10-07 13:36:32.713717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.713749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.713899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.713926] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.336 [2024-10-07 13:36:32.713943] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.714138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.714166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.336 [2024-10-07 13:36:32.714183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.718075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.718107] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.718627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.718652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.718680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.718700] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.718715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.718727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.718806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.336 [2024-10-07 13:36:32.718827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.336 [2024-10-07 13:36:32.724954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.724986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.336 [2024-10-07 13:36:32.725176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.725207] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.336 [2024-10-07 13:36:32.725224] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.725335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.336 [2024-10-07 13:36:32.725362] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.336 [2024-10-07 13:36:32.725379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.336 [2024-10-07 13:36:32.725478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.725506] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.336 [2024-10-07 13:36:32.725529] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.725544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.725558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.336 [2024-10-07 13:36:32.725575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.336 [2024-10-07 13:36:32.725590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.336 [2024-10-07 13:36:32.725602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.336 [2024-10-07 13:36:32.725626] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.336 [2024-10-07 13:36:32.725642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.337 [2024-10-07 13:36:32.735076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.337 [2024-10-07 13:36:32.735108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.337 [2024-10-07 13:36:32.735272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.337 [2024-10-07 13:36:32.735302] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.337 [2024-10-07 13:36:32.735319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.337 [2024-10-07 13:36:32.735430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.337 [2024-10-07 13:36:32.735463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.337 [2024-10-07 13:36:32.735480] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.337 [2024-10-07 13:36:32.735674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.337 [2024-10-07 13:36:32.735705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.337 [2024-10-07 13:36:32.735754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.337 [2024-10-07 13:36:32.735774] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.337 [2024-10-07 13:36:32.735787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.337 [2024-10-07 13:36:32.735805] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.337 [2024-10-07 13:36:32.735820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.337 [2024-10-07 13:36:32.735833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.337 [2024-10-07 13:36:32.736016] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.337 [2024-10-07 13:36:32.736056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.337 [2024-10-07 13:36:32.749819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.749854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.750343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.750374] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.337 [2024-10-07 13:36:32.750392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.750502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.750527] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.337 [2024-10-07 13:36:32.750543] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.750757] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.750787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.751302] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.751326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.751339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.751357] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.751371] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.751383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.751615] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.751640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.760851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.760884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.761101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.761131] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.337 [2024-10-07 13:36:32.761149] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.761226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.761254] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.337 [2024-10-07 13:36:32.761270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.764407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.764438] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.765281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.765320] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.765334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.765351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.765365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.765377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.765835] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.765861] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.770966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.771014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.771170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.771199] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.337 [2024-10-07 13:36:32.771216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.771329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.771356] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.337 [2024-10-07 13:36:32.771372] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.771391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.771418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.771436] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.771449] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.771467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.771493] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.771511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.771523] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.771551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.771574] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.781106] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.781155] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.781313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.781342] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.337 [2024-10-07 13:36:32.781359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.781501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.781528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.337 [2024-10-07 13:36:32.781545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.781563] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.781589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.781607] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.781621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.781634] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.781659] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.781690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.781705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.781718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.781978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.795382] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.795417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.795692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.795724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.337 [2024-10-07 13:36:32.795741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.795844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.795872] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.337 [2024-10-07 13:36:32.795894] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.796227] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.796256] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.796840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.796865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.796879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.796897] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.796911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.796924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.797148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.797172] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.809008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.809042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.809395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.809426] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.337 [2024-10-07 13:36:32.809443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.809528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.809555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.337 [2024-10-07 13:36:32.809572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.810055] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.810085] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.810392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.810417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.810431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.810449] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.810480] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.810493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.810730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.810755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.823798] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.823837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.824741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.824773] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.337 [2024-10-07 13:36:32.824791] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.824874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.824899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.337 [2024-10-07 13:36:32.824916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.825303] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.825348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.825650] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.825690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.825707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.825726] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.825741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.825754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.825963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.825988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.835989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.836022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.836283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.836314] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.337 [2024-10-07 13:36:32.836331] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.836408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.836434] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.337 [2024-10-07 13:36:32.836450] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.836559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.836586] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.836714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.836736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.836750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.836772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.836787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.836800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.838154] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.838179] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.846103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.846148] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.846284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.846313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.337 [2024-10-07 13:36:32.846330] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.846421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.846447] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.337 [2024-10-07 13:36:32.846463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.846482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.846508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.846526] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.846539] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.846553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.846577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.846595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.846607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.337 [2024-10-07 13:36:32.846620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.337 [2024-10-07 13:36:32.846641] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.337 [2024-10-07 13:36:32.856188] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.856318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.337 [2024-10-07 13:36:32.856349] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.337 [2024-10-07 13:36:32.856367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.337 [2024-10-07 13:36:32.856405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.337 [2024-10-07 13:36:32.856437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.337 [2024-10-07 13:36:32.856466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.337 [2024-10-07 13:36:32.856482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.338 [2024-10-07 13:36:32.856501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.338 [2024-10-07 13:36:32.856526] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.338 [2024-10-07 13:36:32.856735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.338 [2024-10-07 13:36:32.856764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.338 [2024-10-07 13:36:32.856780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.338 [2024-10-07 13:36:32.858114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.338 [2024-10-07 13:36:32.858722] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.338 [2024-10-07 13:36:32.858747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.338 [2024-10-07 13:36:32.858761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.338 [2024-10-07 13:36:32.859129] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.338 [2024-10-07 13:36:32.867930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.338 [2024-10-07 13:36:32.867963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.338 [2024-10-07 13:36:32.868310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.338 [2024-10-07 13:36:32.868341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.338 [2024-10-07 13:36:32.868359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.338 [2024-10-07 13:36:32.868472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.338 [2024-10-07 13:36:32.868498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.338 [2024-10-07 13:36:32.868514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.338 [2024-10-07 13:36:32.868637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.338 [2024-10-07 13:36:32.868674] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.338 [2024-10-07 13:36:32.868782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.338 [2024-10-07 13:36:32.868805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.338 [2024-10-07 13:36:32.868818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.338 [2024-10-07 13:36:32.868837] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.338 [2024-10-07 13:36:32.868853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.338 [2024-10-07 13:36:32.868866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.338 [2024-10-07 13:36:32.871306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.338 [2024-10-07 13:36:32.871332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.338 [2024-10-07 13:36:32.878170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.338 [2024-10-07 13:36:32.878202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.338 [2024-10-07 13:36:32.878699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.338 [2024-10-07 13:36:32.878730] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.338 [2024-10-07 13:36:32.878747] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.338 [2024-10-07 13:36:32.878849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.338 [2024-10-07 13:36:32.878874] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.338 [2024-10-07 13:36:32.878890] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.338 [2024-10-07 13:36:32.879177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.338 [2024-10-07 13:36:32.879203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.338 [2024-10-07 13:36:32.879225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.338 [2024-10-07 13:36:32.879239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.338 [2024-10-07 13:36:32.879252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.338 [2024-10-07 13:36:32.879267] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.338 [2024-10-07 13:36:32.879281] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.338 [2024-10-07 13:36:32.879293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.338 [2024-10-07 13:36:32.879316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.338 [2024-10-07 13:36:32.879332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.338 [2024-10-07 13:36:32.888652] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.888708] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.889101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.338 [2024-10-07 13:36:32.889133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.338 [2024-10-07 13:36:32.889150] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.338 [2024-10-07 13:36:32.889257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.338 [2024-10-07 13:36:32.889283] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.338 [2024-10-07 13:36:32.889299] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.338 [2024-10-07 13:36:32.889509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.338 [2024-10-07 13:36:32.889539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.338 [2024-10-07 13:36:32.889750] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.338 [2024-10-07 13:36:32.889775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.338 [2024-10-07 13:36:32.889789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.338 [2024-10-07 13:36:32.889807] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.338 [2024-10-07 13:36:32.889827] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.338 [2024-10-07 13:36:32.889841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.338 [2024-10-07 13:36:32.889907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.338 [2024-10-07 13:36:32.889943] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.338 [2024-10-07 13:36:32.902824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.902857] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.902967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.338 [2024-10-07 13:36:32.902994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.338 [2024-10-07 13:36:32.903011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.338 [2024-10-07 13:36:32.903120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.338 [2024-10-07 13:36:32.903147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.338 [2024-10-07 13:36:32.903164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.338 [2024-10-07 13:36:32.903189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.338 [2024-10-07 13:36:32.903210] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.338 [2024-10-07 13:36:32.903231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.338 [2024-10-07 13:36:32.903245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.338 [2024-10-07 13:36:32.903259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.338 [2024-10-07 13:36:32.903275] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.338 [2024-10-07 13:36:32.903290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.338 [2024-10-07 13:36:32.903303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.338 [2024-10-07 13:36:32.903327] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.338 [2024-10-07 13:36:32.903344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.338 [2024-10-07 13:36:32.917057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.917090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.917257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.338 [2024-10-07 13:36:32.917286] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.338 [2024-10-07 13:36:32.917304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.338 [2024-10-07 13:36:32.917386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.338 [2024-10-07 13:36:32.917413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.338 [2024-10-07 13:36:32.917429] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.338 [2024-10-07 13:36:32.917455] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.338 [2024-10-07 13:36:32.917482] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.338 [2024-10-07 13:36:32.917505] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.338 [2024-10-07 13:36:32.917520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.338 [2024-10-07 13:36:32.917533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.338 [2024-10-07 13:36:32.917550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.338 [2024-10-07 13:36:32.917564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.338 [2024-10-07 13:36:32.917577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.338 [2024-10-07 13:36:32.917601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.338 [2024-10-07 13:36:32.917617] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.338 [2024-10-07 13:36:32.927256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.927289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.927426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.338 [2024-10-07 13:36:32.927455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.338 [2024-10-07 13:36:32.927472] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.338 [2024-10-07 13:36:32.927583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.338 [2024-10-07 13:36:32.927610] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.338 [2024-10-07 13:36:32.927626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.338 [2024-10-07 13:36:32.927651] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.338 [2024-10-07 13:36:32.927682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.338 [2024-10-07 13:36:32.927706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.338 [2024-10-07 13:36:32.927721] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.338 [2024-10-07 13:36:32.927734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.338 [2024-10-07 13:36:32.927751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.338 [2024-10-07 13:36:32.927766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.338 [2024-10-07 13:36:32.927779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.338 [2024-10-07 13:36:32.928472] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.338 [2024-10-07 13:36:32.928496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.338 [2024-10-07 13:36:32.937367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.937412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.937606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.338 [2024-10-07 13:36:32.937640] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.338 [2024-10-07 13:36:32.937658] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.338 [2024-10-07 13:36:32.939109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.338 [2024-10-07 13:36:32.939139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.338 [2024-10-07 13:36:32.939156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.338 [2024-10-07 13:36:32.939175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.338 [2024-10-07 13:36:32.940166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.338 [2024-10-07 13:36:32.940193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.338 [2024-10-07 13:36:32.940206] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.338 [2024-10-07 13:36:32.940218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.338 [2024-10-07 13:36:32.941041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.338 [2024-10-07 13:36:32.941066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.338 [2024-10-07 13:36:32.941080] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.338 [2024-10-07 13:36:32.941092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.338 [2024-10-07 13:36:32.941180] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.338 [2024-10-07 13:36:32.951311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.951345] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.951885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.338 [2024-10-07 13:36:32.951916] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.338 [2024-10-07 13:36:32.951933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.338 [2024-10-07 13:36:32.952037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.338 [2024-10-07 13:36:32.952062] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.338 [2024-10-07 13:36:32.952078] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.338 [2024-10-07 13:36:32.952141] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.338 [2024-10-07 13:36:32.952166] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.338 [2024-10-07 13:36:32.952295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.338 [2024-10-07 13:36:32.952318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.338 [2024-10-07 13:36:32.952333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.338 [2024-10-07 13:36:32.952351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.338 [2024-10-07 13:36:32.952366] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.338 [2024-10-07 13:36:32.952384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.338 [2024-10-07 13:36:32.952592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.338 [2024-10-07 13:36:32.952618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.338 [2024-10-07 13:36:32.962088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.962122] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.962437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.338 [2024-10-07 13:36:32.962469] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.338 [2024-10-07 13:36:32.962486] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.338 [2024-10-07 13:36:32.962562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.338 [2024-10-07 13:36:32.962589] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.338 [2024-10-07 13:36:32.962606] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.338 [2024-10-07 13:36:32.962723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.338 [2024-10-07 13:36:32.962752] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.338 [2024-10-07 13:36:32.964619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.338 [2024-10-07 13:36:32.964645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.338 [2024-10-07 13:36:32.964659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.338 [2024-10-07 13:36:32.964688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.338 [2024-10-07 13:36:32.964704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.338 [2024-10-07 13:36:32.964716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.338 [2024-10-07 13:36:32.965561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.338 [2024-10-07 13:36:32.965586] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.338 [2024-10-07 13:36:32.973435] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.973468] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.338 [2024-10-07 13:36:32.973634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:32.973664] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.339 [2024-10-07 13:36:32.973692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:32.973774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:32.973802] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.339 [2024-10-07 13:36:32.973818] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:32.974516] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:32.974546] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:32.974741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:32.974766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:32.974781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:32.974798] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:32.974813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:32.974826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:32.974933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:32.974955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.339 [2024-10-07 13:36:32.985035] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:32.985068] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:32.985272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:32.985303] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.339 [2024-10-07 13:36:32.985320] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:32.985430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:32.985458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.339 [2024-10-07 13:36:32.985475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:32.985582] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:32.985610] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:32.985771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:32.985795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:32.985810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.339 [2024-10-07 13:36:32.985827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:32.985842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:32.985856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:32.986882] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:32.986908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:32.995153] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:32.995202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:32.995384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:32.995413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.339 [2024-10-07 13:36:32.995436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:32.995529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:32.995556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.339 [2024-10-07 13:36:32.995573] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:32.995592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:32.995617] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:32.995636] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:32.995664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:32.995688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:32.995729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:32.995747] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:32.995759] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:32.995772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:32.995810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.339 [2024-10-07 13:36:33.009635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.009676] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.009816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.009847] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.339 [2024-10-07 13:36:33.009864] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.009941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.009969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.339 [2024-10-07 13:36:33.009986] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.010011] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:33.010033] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:33.010055] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:33.010070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:33.010084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.339 [2024-10-07 13:36:33.010101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:33.010116] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:33.010129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:33.010159] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:33.010196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:33.025503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.025536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.025646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.025688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.339 [2024-10-07 13:36:33.025706] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.025797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.025824] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.339 [2024-10-07 13:36:33.025841] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.025866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:33.025888] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:33.025909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:33.025924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:33.025938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:33.025955] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:33.025970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:33.025983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:33.026018] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:33.026049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.339 [2024-10-07 13:36:33.037326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.037361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.040411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.040445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.339 [2024-10-07 13:36:33.040463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.040549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.040574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.339 [2024-10-07 13:36:33.040590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.041606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:33.041637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:33.042146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:33.042171] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:33.042199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.339 [2024-10-07 13:36:33.042219] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:33.042233] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:33.042245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:33.042510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:33.042535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:33.047471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.047501] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.047687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.047718] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.339 [2024-10-07 13:36:33.047735] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.047856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.047883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.339 [2024-10-07 13:36:33.047899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.047925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:33.047946] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:33.047967] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:33.047981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:33.047994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:33.048011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:33.048026] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:33.048041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:33.048066] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:33.048082] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.339 [2024-10-07 13:36:33.057957] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.057999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.058104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.058134] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.339 [2024-10-07 13:36:33.058152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.058313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.058340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.339 [2024-10-07 13:36:33.058357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.058559] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:33.058589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:33.058721] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:33.058745] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:33.058760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.339 [2024-10-07 13:36:33.058778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:33.058793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:33.058806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:33.058967] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:33.059006] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:33.072131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.072164] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.072278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.072308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.339 [2024-10-07 13:36:33.072325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.072459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.072485] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.339 [2024-10-07 13:36:33.072501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.072544] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:33.072576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:33.072598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:33.072623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:33.072636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:33.072653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:33.072687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:33.072702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:33.072744] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:33.072765] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.339 [2024-10-07 13:36:33.082448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.082480] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.082735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.082766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.339 [2024-10-07 13:36:33.082784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.082867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.082895] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.339 [2024-10-07 13:36:33.082911] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.086206] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:33.086239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.339 [2024-10-07 13:36:33.087144] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:33.087169] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:33.087190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.339 [2024-10-07 13:36:33.087208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.339 [2024-10-07 13:36:33.087222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.339 [2024-10-07 13:36:33.087235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.339 [2024-10-07 13:36:33.087361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:33.087382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.339 [2024-10-07 13:36:33.092556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.092601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.339 [2024-10-07 13:36:33.092816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.092846] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.339 [2024-10-07 13:36:33.092863] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.339 [2024-10-07 13:36:33.093016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.339 [2024-10-07 13:36:33.093044] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.340 [2024-10-07 13:36:33.093067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.340 [2024-10-07 13:36:33.093086] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.340 [2024-10-07 13:36:33.093112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.340 [2024-10-07 13:36:33.093131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.340 [2024-10-07 13:36:33.093150] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.340 [2024-10-07 13:36:33.093164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.340 [2024-10-07 13:36:33.093190] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.340 [2024-10-07 13:36:33.093207] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.340 [2024-10-07 13:36:33.093219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.340 [2024-10-07 13:36:33.093248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.340 [2024-10-07 13:36:33.093272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.340 [2024-10-07 13:36:33.102640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.340 [2024-10-07 13:36:33.102838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.340 [2024-10-07 13:36:33.102869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.340 [2024-10-07 13:36:33.102887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.340 [2024-10-07 13:36:33.102924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.340 [2024-10-07 13:36:33.102956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.340 [2024-10-07 13:36:33.102989] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.340 [2024-10-07 13:36:33.103006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.340 [2024-10-07 13:36:33.103020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.340 [2024-10-07 13:36:33.103044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.340 [2024-10-07 13:36:33.103193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.340 [2024-10-07 13:36:33.103221] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.340 [2024-10-07 13:36:33.103238] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.340 [2024-10-07 13:36:33.103263] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.340 [2024-10-07 13:36:33.103287] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.340 [2024-10-07 13:36:33.103303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.340 [2024-10-07 13:36:33.103316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.340 [2024-10-07 13:36:33.103341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.340 [2024-10-07 13:36:33.117864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.340 [2024-10-07 13:36:33.117914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.340 [2024-10-07 13:36:33.118050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.340 [2024-10-07 13:36:33.118080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.340 [2024-10-07 13:36:33.118097] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.340 [2024-10-07 13:36:33.118187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.340 [2024-10-07 13:36:33.118219] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.340 [2024-10-07 13:36:33.118237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.340 [2024-10-07 13:36:33.118255] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.340 [2024-10-07 13:36:33.118282] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.340 [2024-10-07 13:36:33.118300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.340 [2024-10-07 13:36:33.118313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.340 [2024-10-07 13:36:33.118325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.340 [2024-10-07 13:36:33.118350] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.340 [2024-10-07 13:36:33.118367] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.340 [2024-10-07 13:36:33.118380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.340 [2024-10-07 13:36:33.118393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.340 [2024-10-07 13:36:33.118416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.340 [2024-10-07 13:36:33.127950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.340 [2024-10-07 13:36:33.128131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.340 [2024-10-07 13:36:33.128160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.340 [2024-10-07 13:36:33.128178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.340 [2024-10-07 13:36:33.130552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.340 [2024-10-07 13:36:33.134757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.340 [2024-10-07 13:36:33.134799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.340 [2024-10-07 13:36:33.134820] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.340 [2024-10-07 13:36:33.134833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.340 [2024-10-07 13:36:33.135355] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.340 [2024-10-07 13:36:33.135496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.340 [2024-10-07 13:36:33.135525] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.340 [2024-10-07 13:36:33.135542] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.340 [2024-10-07 13:36:33.136045] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.340 [2024-10-07 13:36:33.136291] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.340 [2024-10-07 13:36:33.136315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.340 [2024-10-07 13:36:33.136329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.340 [2024-10-07 13:36:33.136382] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.340 [2024-10-07 13:36:33.138147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.340 [2024-10-07 13:36:33.138295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.340 [2024-10-07 13:36:33.138325] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.340 [2024-10-07 13:36:33.138342] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.340 [2024-10-07 13:36:33.138367] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.340 [2024-10-07 13:36:33.138391] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.340 [2024-10-07 13:36:33.138406] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.340 [2024-10-07 13:36:33.138420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.340 [2024-10-07 13:36:33.138444] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.340 [2024-10-07 13:36:33.146203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.340 [2024-10-07 13:36:33.146464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.340 [2024-10-07 13:36:33.146495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.340 [2024-10-07 13:36:33.146513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.340 [2024-10-07 13:36:33.146621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.340 [2024-10-07 13:36:33.146751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.340 [2024-10-07 13:36:33.146788] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.340 [2024-10-07 13:36:33.146803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.340 [2024-10-07 13:36:33.146923] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.340 [2024-10-07 13:36:33.151434] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.340 [2024-10-07 13:36:33.151818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.340 [2024-10-07 13:36:33.151850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.340 [2024-10-07 13:36:33.151871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.340 [2024-10-07 13:36:33.151925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.340 [2024-10-07 13:36:33.152111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.340 [2024-10-07 13:36:33.152135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.340 [2024-10-07 13:36:33.152150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.340 [2024-10-07 13:36:33.152201] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.340 [2024-10-07 13:36:33.156647] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.340 [2024-10-07 13:36:33.156797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.340 [2024-10-07 13:36:33.156827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.340 [2024-10-07 13:36:33.156850] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.340 [2024-10-07 13:36:33.156877] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.340 [2024-10-07 13:36:33.156902] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.340 [2024-10-07 13:36:33.156917] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.340 [2024-10-07 13:36:33.156930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.340 [2024-10-07 13:36:33.156955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.340 [2024-10-07 13:36:33.161717] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.340 [2024-10-07 13:36:33.161957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.340 [2024-10-07 13:36:33.161987] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.340 [2024-10-07 13:36:33.162005] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.340 [2024-10-07 13:36:33.162112] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.340 [2024-10-07 13:36:33.162224] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.340 [2024-10-07 13:36:33.162260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.340 [2024-10-07 13:36:33.162274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.340 [2024-10-07 13:36:33.165535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.340 [2024-10-07 13:36:33.167288] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.340 [2024-10-07 13:36:33.167430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.340 [2024-10-07 13:36:33.167460] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.340 [2024-10-07 13:36:33.167477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.340 [2024-10-07 13:36:33.167503] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.340 [2024-10-07 13:36:33.167527] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.340 [2024-10-07 13:36:33.167542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.340 [2024-10-07 13:36:33.167556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.340 [2024-10-07 13:36:33.167580] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.340 [2024-10-07 13:36:33.172307] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.340 [2024-10-07 13:36:33.172466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.340 [2024-10-07 13:36:33.172495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.340 [2024-10-07 13:36:33.172513] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.340 [2024-10-07 13:36:33.172538] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.340 [2024-10-07 13:36:33.172563] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.340 [2024-10-07 13:36:33.172584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.340 [2024-10-07 13:36:33.172598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.340 [2024-10-07 13:36:33.172623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.340 [2024-10-07 13:36:33.181619] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.340 [2024-10-07 13:36:33.182200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.340 [2024-10-07 13:36:33.182247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.340 [2024-10-07 13:36:33.182272] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.340 [2024-10-07 13:36:33.182508] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.340 [2024-10-07 13:36:33.182743] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.340 [2024-10-07 13:36:33.182768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.340 [2024-10-07 13:36:33.182783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.340 [2024-10-07 13:36:33.182837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.340 [2024-10-07 13:36:33.182863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.340 [2024-10-07 13:36:33.182999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.340 [2024-10-07 13:36:33.183027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.340 [2024-10-07 13:36:33.183044] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.340 [2024-10-07 13:36:33.183238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.340 [2024-10-07 13:36:33.183310] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.340 [2024-10-07 13:36:33.183331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.340 [2024-10-07 13:36:33.183344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.340 [2024-10-07 13:36:33.183384] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.340 [2024-10-07 13:36:33.192810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.340 [2024-10-07 13:36:33.193065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.340 [2024-10-07 13:36:33.193096] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.340 [2024-10-07 13:36:33.193114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.340 [2024-10-07 13:36:33.195405] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.340 [2024-10-07 13:36:33.195821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.340 [2024-10-07 13:36:33.195874] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.340 [2024-10-07 13:36:33.195906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.340 [2024-10-07 13:36:33.195921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.340 [2024-10-07 13:36:33.196920] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.340 [2024-10-07 13:36:33.197046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.340 [2024-10-07 13:36:33.197075] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.340 [2024-10-07 13:36:33.197092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.340 [2024-10-07 13:36:33.197527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.340 [2024-10-07 13:36:33.197775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.340 [2024-10-07 13:36:33.197799] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.340 [2024-10-07 13:36:33.197814] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.340 [2024-10-07 13:36:33.197865] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.340 [2024-10-07 13:36:33.202907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.340 [2024-10-07 13:36:33.203101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.340 [2024-10-07 13:36:33.203129] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.340 [2024-10-07 13:36:33.203146] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.340 [2024-10-07 13:36:33.207386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.340 [2024-10-07 13:36:33.207603] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.340 [2024-10-07 13:36:33.207631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.340 [2024-10-07 13:36:33.207645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.340 [2024-10-07 13:36:33.207768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.340 [2024-10-07 13:36:33.207794] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.340 [2024-10-07 13:36:33.208337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.340 [2024-10-07 13:36:33.208368] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.340 [2024-10-07 13:36:33.208385] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.340 [2024-10-07 13:36:33.210381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.340 [2024-10-07 13:36:33.210701] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.340 [2024-10-07 13:36:33.210730] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.340 [2024-10-07 13:36:33.210745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.340 [2024-10-07 13:36:33.211560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.340 [2024-10-07 13:36:33.213000] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.340 [2024-10-07 13:36:33.213236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.340 [2024-10-07 13:36:33.213267] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.341 [2024-10-07 13:36:33.213284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.213315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.213340] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.213354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.213368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.213394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.217861] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.218010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.218037] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.341 [2024-10-07 13:36:33.218055] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.218760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.218927] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.218949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.218962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.219068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.225290] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.225610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.225642] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.341 [2024-10-07 13:36:33.225660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.225722] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.225751] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.225767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.225780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.225963] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.227944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.228091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.228118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.341 [2024-10-07 13:36:33.228134] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.228160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.228184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.228200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.228219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.228244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.237254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.237521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.237552] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.341 [2024-10-07 13:36:33.237571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.237692] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.239974] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.240001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.240016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.241075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.241429] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.241790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.241821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.341 [2024-10-07 13:36:33.241838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.241890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.242351] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.242390] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.242404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.242637] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.247602] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.247740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.247769] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.341 [2024-10-07 13:36:33.247786] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.247812] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.247853] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.247874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.247888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.247912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.254794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.254933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.254967] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.341 [2024-10-07 13:36:33.254985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.255010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.255035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.255050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.255064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.255088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.258870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.259062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.259092] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.341 [2024-10-07 13:36:33.259109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.259135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.259159] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.259174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.259187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.259213] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.270356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.270408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.270536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.270564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.341 [2024-10-07 13:36:33.270580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.270698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.270725] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.341 [2024-10-07 13:36:33.270742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.270761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.270787] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.270806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.270819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.270832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.270857] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.270883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.270897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.270911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.270935] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.282898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.282932] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.283151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.283181] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.341 [2024-10-07 13:36:33.283198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.283334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.283360] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.341 [2024-10-07 13:36:33.283376] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.283483] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.283509] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.283627] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.283648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.283663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.283690] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.283705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.283717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.283931] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.283953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.293010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.293057] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.293191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.293218] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.341 [2024-10-07 13:36:33.293234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.293352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.293378] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.341 [2024-10-07 13:36:33.293394] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.293419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.293446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.293464] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.293478] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.293491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.293516] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.293534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.293546] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.293559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.293597] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.303696] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.303729] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.341 [2024-10-07 13:36:33.303873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.303902] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.341 [2024-10-07 13:36:33.303920] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.304002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.341 [2024-10-07 13:36:33.304027] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.341 [2024-10-07 13:36:33.304043] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.341 [2024-10-07 13:36:33.304229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.304258] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.341 [2024-10-07 13:36:33.304487] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.304511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.304526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.304544] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.341 [2024-10-07 13:36:33.304559] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.341 [2024-10-07 13:36:33.304572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.341 [2024-10-07 13:36:33.304622] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.304643] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.341 [2024-10-07 13:36:33.319384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.341 [2024-10-07 13:36:33.319417] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.341 [2024-10-07 13:36:33.319551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.341 [2024-10-07 13:36:33.319586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.341 [2024-10-07 13:36:33.319604] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.341 [2024-10-07 13:36:33.319725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.341 [2024-10-07 13:36:33.319752] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.341 [2024-10-07 13:36:33.319769] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.341 [2024-10-07 13:36:33.319795] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.341 [2024-10-07 13:36:33.319817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.341 [2024-10-07 13:36:33.319838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.341 [2024-10-07 13:36:33.319853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.341 [2024-10-07 13:36:33.319867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.341 [2024-10-07 13:36:33.319884] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.341 [2024-10-07 13:36:33.319900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.341 [2024-10-07 13:36:33.319913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.341 [2024-10-07 13:36:33.319938] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.341 [2024-10-07 13:36:33.319953] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.341 [2024-10-07 13:36:33.332587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.341 [2024-10-07 13:36:33.332621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.341 [2024-10-07 13:36:33.332914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.341 [2024-10-07 13:36:33.332944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.341 [2024-10-07 13:36:33.332962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.341 [2024-10-07 13:36:33.333049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.341 [2024-10-07 13:36:33.333076] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.341 [2024-10-07 13:36:33.333092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.333119] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.333140] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.333161] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.333177] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.333190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.333208] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.333222] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.333241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.333266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.333283] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.342 [2024-10-07 13:36:33.346182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.346216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.346462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.346491] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.342 [2024-10-07 13:36:33.346508] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.346593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.346619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.342 [2024-10-07 13:36:33.346635] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.346782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.346811] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.347048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.347071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.347086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.342 [2024-10-07 13:36:33.347103] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.347119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.347132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.347196] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.347232] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.359738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.359773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.360064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.360097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.342 [2024-10-07 13:36:33.360115] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.360219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.360245] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.342 [2024-10-07 13:36:33.360260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.360464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.360499] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.360549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.360570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.360585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.360602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.360616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.360629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.360654] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.360681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.342 [2024-10-07 13:36:33.375177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.375212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.376054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.376086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.342 [2024-10-07 13:36:33.376103] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.376217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.376242] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.342 [2024-10-07 13:36:33.376258] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.376733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.376766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.377043] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.377068] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.377083] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.342 [2024-10-07 13:36:33.377116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.377132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.377145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.377380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.377404] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.390817] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.390852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.391352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.391383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.342 [2024-10-07 13:36:33.391406] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.391547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.391572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.342 [2024-10-07 13:36:33.391589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.391815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.391845] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.392046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.392070] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.392085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.392102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.392117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.392130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.392343] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.392368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.342 [2024-10-07 13:36:33.406550] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.406584] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.406750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.406780] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.342 [2024-10-07 13:36:33.406797] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.406881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.406907] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.342 [2024-10-07 13:36:33.406923] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.406948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.406970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.406991] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.407007] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.407020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.342 [2024-10-07 13:36:33.407037] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.407051] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.407070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.407096] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.407113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.421840] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.421875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.422022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.422051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.342 [2024-10-07 13:36:33.422068] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.422178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.422204] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.342 [2024-10-07 13:36:33.422219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.422245] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.422267] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.422288] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.422303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.422318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.422336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.422350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.422363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.422388] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.422405] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.342 [2024-10-07 13:36:33.437642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.437700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.438064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.438095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.342 [2024-10-07 13:36:33.438113] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.438195] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.438220] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.342 [2024-10-07 13:36:33.438237] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.438440] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.438469] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.438686] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.438710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.438724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.342 [2024-10-07 13:36:33.438741] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.438756] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.438770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.438836] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.438871] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.453230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.453263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.453613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.453645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.342 [2024-10-07 13:36:33.453663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.453786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.453812] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.342 [2024-10-07 13:36:33.453830] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.454184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.454229] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.454301] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.454321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.454351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.454369] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.454384] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.454397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.454579] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.454602] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.342 [2024-10-07 13:36:33.468787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.468820] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.469179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.469211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.342 [2024-10-07 13:36:33.469234] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.469322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.469348] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.342 [2024-10-07 13:36:33.469364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.469583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.469612] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.342 [2024-10-07 13:36:33.469822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.469849] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.469864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.342 [2024-10-07 13:36:33.469882] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.342 [2024-10-07 13:36:33.469897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.342 [2024-10-07 13:36:33.469910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.342 [2024-10-07 13:36:33.469960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.469981] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.342 [2024-10-07 13:36:33.484456] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.484488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.342 [2024-10-07 13:36:33.484837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.484869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.342 [2024-10-07 13:36:33.484886] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.342 [2024-10-07 13:36:33.484969] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.342 [2024-10-07 13:36:33.484994] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.343 [2024-10-07 13:36:33.485011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.343 [2024-10-07 13:36:33.485216] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.343 [2024-10-07 13:36:33.485244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.343 [2024-10-07 13:36:33.485462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.343 [2024-10-07 13:36:33.485487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.343 [2024-10-07 13:36:33.485502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.343 [2024-10-07 13:36:33.485520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.343 [2024-10-07 13:36:33.485535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.343 [2024-10-07 13:36:33.485548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.343 [2024-10-07 13:36:33.485798] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.343 [2024-10-07 13:36:33.485822] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.343 [2024-10-07 13:36:33.499812] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.343 [2024-10-07 13:36:33.499845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.343 [2024-10-07 13:36:33.499993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.343 [2024-10-07 13:36:33.500022] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.343 [2024-10-07 13:36:33.500039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.343 [2024-10-07 13:36:33.500137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.343 [2024-10-07 13:36:33.500162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.343 [2024-10-07 13:36:33.500178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.343 [2024-10-07 13:36:33.500203] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.343 [2024-10-07 13:36:33.500226] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.343 [2024-10-07 13:36:33.500248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.343 [2024-10-07 13:36:33.500263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.343 [2024-10-07 13:36:33.500276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.343 [2024-10-07 13:36:33.500293] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.343 [2024-10-07 13:36:33.500307] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.343 [2024-10-07 13:36:33.500321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.343 [2024-10-07 13:36:33.500345] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.343 [2024-10-07 13:36:33.500361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.343 [2024-10-07 13:36:33.512773] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.343 [2024-10-07 13:36:33.512807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.343 [2024-10-07 13:36:33.513079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.343 [2024-10-07 13:36:33.513110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.343 [2024-10-07 13:36:33.513128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.343 [2024-10-07 13:36:33.513240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.343 [2024-10-07 13:36:33.513266] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.343 [2024-10-07 13:36:33.513283] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.343 [2024-10-07 13:36:33.513391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.343 [2024-10-07 13:36:33.513418] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.343 [2024-10-07 13:36:33.513535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.343 [2024-10-07 13:36:33.513561] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.343 [2024-10-07 13:36:33.513576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.343 [2024-10-07 13:36:33.513593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.343 [2024-10-07 13:36:33.513607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.343 [2024-10-07 13:36:33.513619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.343 [2024-10-07 13:36:33.516732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.343 [2024-10-07 13:36:33.516760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.343 [2024-10-07 13:36:33.522888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.343 [2024-10-07 13:36:33.522935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.343 [2024-10-07 13:36:33.523112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.343 [2024-10-07 13:36:33.523140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.343 [2024-10-07 13:36:33.523158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.343 [2024-10-07 13:36:33.523274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.343 [2024-10-07 13:36:33.523300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.343 [2024-10-07 13:36:33.523317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.343 [2024-10-07 13:36:33.523335] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.343 [2024-10-07 13:36:33.523361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.343 [2024-10-07 13:36:33.523380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.343 [2024-10-07 13:36:33.523394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.343 [2024-10-07 13:36:33.523407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.343 [2024-10-07 13:36:33.523432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.343 [2024-10-07 13:36:33.523450] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.343 [2024-10-07 13:36:33.523462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.343 [2024-10-07 13:36:33.523476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.343 [2024-10-07 13:36:33.523499] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.343 [2024-10-07 13:36:33.532972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.343 [2024-10-07 13:36:33.533184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.343 [2024-10-07 13:36:33.533213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.343 [2024-10-07 13:36:33.533231] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.343 [2024-10-07 13:36:33.533270] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.343 [2024-10-07 13:36:33.533308] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.343 [2024-10-07 13:36:33.533339] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.343 [2024-10-07 13:36:33.533356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.343 [2024-10-07 13:36:33.533384] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.343 [2024-10-07 13:36:33.533409] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.343 [2024-10-07 13:36:33.533579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.343 [2024-10-07 13:36:33.533605] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.343 [2024-10-07 13:36:33.533621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.343 [2024-10-07 13:36:33.533660] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.343 [2024-10-07 13:36:33.533694] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.343 [2024-10-07 13:36:33.533710] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.343 [2024-10-07 13:36:33.533724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.343 [2024-10-07 13:36:33.533748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.343 [2024-10-07 13:36:33.547212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.343 [2024-10-07 13:36:33.547246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.343 [2024-10-07 13:36:33.547780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.343 [2024-10-07 13:36:33.547811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.343 [2024-10-07 13:36:33.547828] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.343 [2024-10-07 13:36:33.547913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.344 [2024-10-07 13:36:33.547939] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.344 [2024-10-07 13:36:33.547955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.344 [2024-10-07 13:36:33.548325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.344 [2024-10-07 13:36:33.548368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.344 [2024-10-07 13:36:33.548441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.344 [2024-10-07 13:36:33.548462] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.344 [2024-10-07 13:36:33.548476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.344 [2024-10-07 13:36:33.548493] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.344 [2024-10-07 13:36:33.548508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.344 [2024-10-07 13:36:33.548521] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.344 [2024-10-07 13:36:33.548717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.344 [2024-10-07 13:36:33.548761] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.344 [2024-10-07 13:36:33.563114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.344 [2024-10-07 13:36:33.563147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.344 [2024-10-07 13:36:33.563718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.344 [2024-10-07 13:36:33.563750] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.344 [2024-10-07 13:36:33.563768] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.344 [2024-10-07 13:36:33.563854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.344 [2024-10-07 13:36:33.563880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.344 [2024-10-07 13:36:33.563896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.344 [2024-10-07 13:36:33.564113] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.344 [2024-10-07 13:36:33.564142] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.344 [2024-10-07 13:36:33.564359] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.344 [2024-10-07 13:36:33.564383] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.344 [2024-10-07 13:36:33.564397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.344 [2024-10-07 13:36:33.564415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.344 [2024-10-07 13:36:33.564430] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.344 [2024-10-07 13:36:33.564444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.344 [2024-10-07 13:36:33.564510] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.344 [2024-10-07 13:36:33.564546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.344 [2024-10-07 13:36:33.577794] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.344 [2024-10-07 13:36:33.577827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.344 [2024-10-07 13:36:33.578045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.344 [2024-10-07 13:36:33.578073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.344 [2024-10-07 13:36:33.578090] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.344 [2024-10-07 13:36:33.578169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.344 [2024-10-07 13:36:33.578195] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.344 [2024-10-07 13:36:33.578212] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.344 [2024-10-07 13:36:33.578238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.344 [2024-10-07 13:36:33.578260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.344 [2024-10-07 13:36:33.578281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.344 [2024-10-07 13:36:33.578296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.344 [2024-10-07 13:36:33.578315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.344 [2024-10-07 13:36:33.578333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.344 [2024-10-07 13:36:33.578347] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.344 [2024-10-07 13:36:33.578360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.344 [2024-10-07 13:36:33.578385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.344 [2024-10-07 13:36:33.578416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.344 [2024-10-07 13:36:33.593888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.344 [2024-10-07 13:36:33.593923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.344 [2024-10-07 13:36:33.594722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.344 [2024-10-07 13:36:33.594755] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.344 [2024-10-07 13:36:33.594773] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.344 [2024-10-07 13:36:33.594890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.344 [2024-10-07 13:36:33.594917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.344 [2024-10-07 13:36:33.594933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.344 [2024-10-07 13:36:33.595541] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.344 [2024-10-07 13:36:33.595571] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.344 [2024-10-07 13:36:33.595816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.344 [2024-10-07 13:36:33.595842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.344 [2024-10-07 13:36:33.595857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.344 [2024-10-07 13:36:33.595875] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.344 [2024-10-07 13:36:33.595891] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.344 [2024-10-07 13:36:33.595904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.344 [2024-10-07 13:36:33.595956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.344 [2024-10-07 13:36:33.595978] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.344 [2024-10-07 13:36:33.605622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.344 [2024-10-07 13:36:33.605656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.344 [2024-10-07 13:36:33.605869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.344 [2024-10-07 13:36:33.605898] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.344 [2024-10-07 13:36:33.605916] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.344 [2024-10-07 13:36:33.606022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.344 [2024-10-07 13:36:33.606053] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.344 [2024-10-07 13:36:33.606070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.344 [2024-10-07 13:36:33.606182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.344 [2024-10-07 13:36:33.606209] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.344 [2024-10-07 13:36:33.606333] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.344 [2024-10-07 13:36:33.606356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.344 [2024-10-07 13:36:33.606385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.344 [2024-10-07 13:36:33.606403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.344 [2024-10-07 13:36:33.606417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.344 [2024-10-07 13:36:33.606430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.344 [2024-10-07 13:36:33.606546] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.344 [2024-10-07 13:36:33.606568] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.344 [2024-10-07 13:36:33.615753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.344 [2024-10-07 13:36:33.615803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.344 [2024-10-07 13:36:33.615931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.344 [2024-10-07 13:36:33.615958] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.344 [2024-10-07 13:36:33.615975] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.344 [2024-10-07 13:36:33.616093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.344 [2024-10-07 13:36:33.616119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.344 [2024-10-07 13:36:33.616135] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.344 [2024-10-07 13:36:33.616155] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.344 [2024-10-07 13:36:33.616181] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.344 [2024-10-07 13:36:33.616199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.344 [2024-10-07 13:36:33.616213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.344 [2024-10-07 13:36:33.616226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.344 [2024-10-07 13:36:33.616252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.344 [2024-10-07 13:36:33.616268] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.344 [2024-10-07 13:36:33.616283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.344 [2024-10-07 13:36:33.616312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.344 [2024-10-07 13:36:33.616335] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.344 [2024-10-07 13:36:33.625841] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.344 [2024-10-07 13:36:33.626029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.344 [2024-10-07 13:36:33.626058] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.344 [2024-10-07 13:36:33.626076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.344 [2024-10-07 13:36:33.626287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.344 [2024-10-07 13:36:33.626364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.344 [2024-10-07 13:36:33.626415] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.344 [2024-10-07 13:36:33.626433] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.344 [2024-10-07 13:36:33.626447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.344 [2024-10-07 13:36:33.626630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.344 [2024-10-07 13:36:33.626734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.344 [2024-10-07 13:36:33.626763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.344 [2024-10-07 13:36:33.626779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.345 [2024-10-07 13:36:33.626831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.345 [2024-10-07 13:36:33.626859] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.345 [2024-10-07 13:36:33.626874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.345 [2024-10-07 13:36:33.626888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.345 [2024-10-07 13:36:33.626913] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.345 8455.17 IOPS, 33.03 MiB/s [2024-10-07T11:36:38.057Z] [2024-10-07 13:36:33.640108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.345 [2024-10-07 13:36:33.640143] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.345 [2024-10-07 13:36:33.640524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.345 [2024-10-07 13:36:33.640556] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.345 [2024-10-07 13:36:33.640574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.345 [2024-10-07 13:36:33.640672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.345 [2024-10-07 13:36:33.640700] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.345 [2024-10-07 13:36:33.640717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.345 [2024-10-07 13:36:33.640923] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.345 [2024-10-07 13:36:33.640951] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.345 [2024-10-07 13:36:33.641167] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.345 [2024-10-07 13:36:33.641190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.345 [2024-10-07 13:36:33.641211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.345 [2024-10-07 13:36:33.641230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.345 [2024-10-07 13:36:33.641245] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.345 [2024-10-07 13:36:33.641258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.345 [2024-10-07 13:36:33.641321] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.345 [2024-10-07 13:36:33.641356] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.345 [2024-10-07 13:36:33.654104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.345 [2024-10-07 13:36:33.654138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.345 [2024-10-07 13:36:33.656164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.345 [2024-10-07 13:36:33.656196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.345 [2024-10-07 13:36:33.656213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.345 [2024-10-07 13:36:33.656301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.345 [2024-10-07 13:36:33.656330] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.345 [2024-10-07 13:36:33.656356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.345 [2024-10-07 13:36:33.657025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.345 [2024-10-07 13:36:33.657056] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.345 [2024-10-07 13:36:33.657477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.345 [2024-10-07 13:36:33.657503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.345 [2024-10-07 13:36:33.657516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.345 [2024-10-07 13:36:33.657534] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.345 [2024-10-07 13:36:33.657549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.345 [2024-10-07 13:36:33.657562] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.345 [2024-10-07 13:36:33.657804] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.345 [2024-10-07 13:36:33.657829] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.345 [2024-10-07 13:36:33.664418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.345 [2024-10-07 13:36:33.664451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.345 [2024-10-07 13:36:33.664704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.345 [2024-10-07 13:36:33.664733] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.345 [2024-10-07 13:36:33.664751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.345 [2024-10-07 13:36:33.664840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.345 [2024-10-07 13:36:33.664867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.345 [2024-10-07 13:36:33.664897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.345 [2024-10-07 13:36:33.665576] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.345 [2024-10-07 13:36:33.665605] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.345 [2024-10-07 13:36:33.665655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.345 [2024-10-07 13:36:33.665681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.345 [2024-10-07 13:36:33.665712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.345 [2024-10-07 13:36:33.665729] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.345 [2024-10-07 13:36:33.665743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.345 [2024-10-07 13:36:33.665755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.345 [2024-10-07 13:36:33.665779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.345 [2024-10-07 13:36:33.665795] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.345 [2024-10-07 13:36:33.674534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.345 [2024-10-07 13:36:33.674583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.345 [2024-10-07 13:36:33.674745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.345 [2024-10-07 13:36:33.674775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.345 [2024-10-07 13:36:33.674792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.345 [2024-10-07 13:36:33.675107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.345 [2024-10-07 13:36:33.675138] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.345 [2024-10-07 13:36:33.675155] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.345 [2024-10-07 13:36:33.675175] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.345 [2024-10-07 13:36:33.675380] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.345 [2024-10-07 13:36:33.675407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.345 [2024-10-07 13:36:33.675421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.345 [2024-10-07 13:36:33.675434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.345 [2024-10-07 13:36:33.675500] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.345 [2024-10-07 13:36:33.675522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.345 [2024-10-07 13:36:33.675535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.345 [2024-10-07 13:36:33.675566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.345 [2024-10-07 13:36:33.675590] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.345 [2024-10-07 13:36:33.689500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.345 [2024-10-07 13:36:33.689540] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.345 [2024-10-07 13:36:33.689864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.345 [2024-10-07 13:36:33.689897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.345 [2024-10-07 13:36:33.689915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.345 [2024-10-07 13:36:33.690027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.345 [2024-10-07 13:36:33.690052] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.345 [2024-10-07 13:36:33.690069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.345 [2024-10-07 13:36:33.690276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.345 [2024-10-07 13:36:33.690304] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.345 [2024-10-07 13:36:33.690352] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.345 [2024-10-07 13:36:33.690372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.345 [2024-10-07 13:36:33.690386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.345 [2024-10-07 13:36:33.690403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.345 [2024-10-07 13:36:33.690417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.345 [2024-10-07 13:36:33.690430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.345 [2024-10-07 13:36:33.690613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.345 [2024-10-07 13:36:33.690636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.345 [2024-10-07 13:36:33.705013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.345 [2024-10-07 13:36:33.705063] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.345 [2024-10-07 13:36:33.705638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.345 [2024-10-07 13:36:33.705678] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.345 [2024-10-07 13:36:33.705697] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.345 [2024-10-07 13:36:33.705785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.345 [2024-10-07 13:36:33.705811] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.345 [2024-10-07 13:36:33.705826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.345 [2024-10-07 13:36:33.706043] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.345 [2024-10-07 13:36:33.706071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.345 [2024-10-07 13:36:33.706118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.345 [2024-10-07 13:36:33.706139] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.345 [2024-10-07 13:36:33.706153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.345 [2024-10-07 13:36:33.706176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.345 [2024-10-07 13:36:33.706192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.345 [2024-10-07 13:36:33.706205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.345 [2024-10-07 13:36:33.706401] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.345 [2024-10-07 13:36:33.706429] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.345 [2024-10-07 13:36:33.719566] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.345 [2024-10-07 13:36:33.719599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.345 [2024-10-07 13:36:33.719754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.345 [2024-10-07 13:36:33.719784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.345 [2024-10-07 13:36:33.719801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.345 [2024-10-07 13:36:33.719893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.346 [2024-10-07 13:36:33.719918] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.346 [2024-10-07 13:36:33.719933] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.346 [2024-10-07 13:36:33.719958] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.346 [2024-10-07 13:36:33.719979] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.346 [2024-10-07 13:36:33.720002] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.346 [2024-10-07 13:36:33.720017] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.346 [2024-10-07 13:36:33.720031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.346 [2024-10-07 13:36:33.720047] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.346 [2024-10-07 13:36:33.720061] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.346 [2024-10-07 13:36:33.720074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.346 [2024-10-07 13:36:33.720100] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.346 [2024-10-07 13:36:33.720116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.346 [2024-10-07 13:36:33.729686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.346 [2024-10-07 13:36:33.730372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.346 [2024-10-07 13:36:33.730519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.346 [2024-10-07 13:36:33.730548] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.346 [2024-10-07 13:36:33.730565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.346 [2024-10-07 13:36:33.735816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.346 [2024-10-07 13:36:33.735849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.346 [2024-10-07 13:36:33.735867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.346 [2024-10-07 13:36:33.735892] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.346 [2024-10-07 13:36:33.736569] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.346 [2024-10-07 13:36:33.736596] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.346 [2024-10-07 13:36:33.736609] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.346 [2024-10-07 13:36:33.736622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.346 [2024-10-07 13:36:33.736735] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.346 [2024-10-07 13:36:33.736758] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.346 [2024-10-07 13:36:33.736772] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.346 [2024-10-07 13:36:33.736785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.346 [2024-10-07 13:36:33.736809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.346 [2024-10-07 13:36:33.739770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.346 [2024-10-07 13:36:33.739884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.346 [2024-10-07 13:36:33.739912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.346 [2024-10-07 13:36:33.739929] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.346 [2024-10-07 13:36:33.739969] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.346 [2024-10-07 13:36:33.739993] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.346 [2024-10-07 13:36:33.740008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.346 [2024-10-07 13:36:33.740022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.346 [2024-10-07 13:36:33.740046] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.346 [2024-10-07 13:36:33.740673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.346 [2024-10-07 13:36:33.740825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.346 [2024-10-07 13:36:33.740856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.346 [2024-10-07 13:36:33.740873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.346 [2024-10-07 13:36:33.740899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.346 [2024-10-07 13:36:33.740923] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.346 [2024-10-07 13:36:33.740937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.346 [2024-10-07 13:36:33.740951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.346 [2024-10-07 13:36:33.740976] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.346 [2024-10-07 13:36:33.753187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.346 [2024-10-07 13:36:33.753222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.346 [2024-10-07 13:36:33.753538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.346 [2024-10-07 13:36:33.753571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.346 [2024-10-07 13:36:33.753589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.346 [2024-10-07 13:36:33.753679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.346 [2024-10-07 13:36:33.753706] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.346 [2024-10-07 13:36:33.753723] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.346 [2024-10-07 13:36:33.753945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.346 [2024-10-07 13:36:33.753974] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.346 [2024-10-07 13:36:33.754022] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.346 [2024-10-07 13:36:33.754057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.346 [2024-10-07 13:36:33.754072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.346 [2024-10-07 13:36:33.754089] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.346 [2024-10-07 13:36:33.754119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.346 [2024-10-07 13:36:33.754133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.346 [2024-10-07 13:36:33.754316] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.346 [2024-10-07 13:36:33.754339] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.346 [2024-10-07 13:36:33.768318] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.346 [2024-10-07 13:36:33.768352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.346 [2024-10-07 13:36:33.768726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.346 [2024-10-07 13:36:33.768757] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.346 [2024-10-07 13:36:33.768774] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.346 [2024-10-07 13:36:33.768863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.346 [2024-10-07 13:36:33.768889] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.346 [2024-10-07 13:36:33.768906] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.346 [2024-10-07 13:36:33.769110] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.346 [2024-10-07 13:36:33.769139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.346 [2024-10-07 13:36:33.769338] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.346 [2024-10-07 13:36:33.769361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.346 [2024-10-07 13:36:33.769375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.346 [2024-10-07 13:36:33.769392] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.346 [2024-10-07 13:36:33.769412] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.346 [2024-10-07 13:36:33.769426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.346 [2024-10-07 13:36:33.769492] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.346 [2024-10-07 13:36:33.769513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.346 [2024-10-07 13:36:33.783349] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.346 [2024-10-07 13:36:33.783384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.346 [2024-10-07 13:36:33.783900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.346 [2024-10-07 13:36:33.783934] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.346 [2024-10-07 13:36:33.783952] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.346 [2024-10-07 13:36:33.784122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.346 [2024-10-07 13:36:33.784148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.346 [2024-10-07 13:36:33.784164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.346 [2024-10-07 13:36:33.784549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.346 [2024-10-07 13:36:33.784580] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.346 [2024-10-07 13:36:33.784825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.346 [2024-10-07 13:36:33.784852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.346 [2024-10-07 13:36:33.784867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.346 [2024-10-07 13:36:33.784886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.346 [2024-10-07 13:36:33.784900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.346 [2024-10-07 13:36:33.784914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.346 [2024-10-07 13:36:33.784980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.346 [2024-10-07 13:36:33.785000] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.346 [2024-10-07 13:36:33.799512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.346 [2024-10-07 13:36:33.799546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.346 [2024-10-07 13:36:33.800329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.346 [2024-10-07 13:36:33.800361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.346 [2024-10-07 13:36:33.800379] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.346 [2024-10-07 13:36:33.800515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.346 [2024-10-07 13:36:33.800540] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.346 [2024-10-07 13:36:33.800556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.346 [2024-10-07 13:36:33.800802] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.346 [2024-10-07 13:36:33.800832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.346 [2024-10-07 13:36:33.801032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.346 [2024-10-07 13:36:33.801055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.346 [2024-10-07 13:36:33.801070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.346 [2024-10-07 13:36:33.801088] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.347 [2024-10-07 13:36:33.801103] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.347 [2024-10-07 13:36:33.801116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.347 [2024-10-07 13:36:33.801319] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.347 [2024-10-07 13:36:33.801342] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.347 [2024-10-07 13:36:33.815104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.347 [2024-10-07 13:36:33.815138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.347 [2024-10-07 13:36:33.815861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.347 [2024-10-07 13:36:33.815893] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.347 [2024-10-07 13:36:33.815910] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.347 [2024-10-07 13:36:33.816020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.347 [2024-10-07 13:36:33.816046] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.347 [2024-10-07 13:36:33.816061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.347 [2024-10-07 13:36:33.816291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.347 [2024-10-07 13:36:33.816319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.347 [2024-10-07 13:36:33.816520] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.347 [2024-10-07 13:36:33.816543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.347 [2024-10-07 13:36:33.816558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.347 [2024-10-07 13:36:33.816575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.347 [2024-10-07 13:36:33.816590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.347 [2024-10-07 13:36:33.816603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.347 [2024-10-07 13:36:33.816679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.347 [2024-10-07 13:36:33.816717] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.347 [2024-10-07 13:36:33.830327] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.347 [2024-10-07 13:36:33.830361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.347 [2024-10-07 13:36:33.830605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.347 [2024-10-07 13:36:33.830639] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.347 [2024-10-07 13:36:33.830656] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.347 [2024-10-07 13:36:33.830780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.347 [2024-10-07 13:36:33.830807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.347 [2024-10-07 13:36:33.830823] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.347 [2024-10-07 13:36:33.830849] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.347 [2024-10-07 13:36:33.830871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.347 [2024-10-07 13:36:33.830892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.347 [2024-10-07 13:36:33.830908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.347 [2024-10-07 13:36:33.830922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.347 [2024-10-07 13:36:33.830939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.347 [2024-10-07 13:36:33.830953] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.347 [2024-10-07 13:36:33.830966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.347 [2024-10-07 13:36:33.830991] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.347 [2024-10-07 13:36:33.831024] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.347 [2024-10-07 13:36:33.841553] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.347 [2024-10-07 13:36:33.841587] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.347 [2024-10-07 13:36:33.843551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.347 [2024-10-07 13:36:33.843584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.347 [2024-10-07 13:36:33.843602] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.347 [2024-10-07 13:36:33.843714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.347 [2024-10-07 13:36:33.843741] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.347 [2024-10-07 13:36:33.843757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.347 [2024-10-07 13:36:33.845985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.347 [2024-10-07 13:36:33.846018] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.347 [2024-10-07 13:36:33.847007] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.347 [2024-10-07 13:36:33.847032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.347 [2024-10-07 13:36:33.847045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.347 [2024-10-07 13:36:33.847062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.347 [2024-10-07 13:36:33.847076] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.347 [2024-10-07 13:36:33.847096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.347 [2024-10-07 13:36:33.847365] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.347 [2024-10-07 13:36:33.847389] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.347 [2024-10-07 13:36:33.851917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.347 [2024-10-07 13:36:33.851948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.347 [2024-10-07 13:36:33.852073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.347 [2024-10-07 13:36:33.852101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.347 [2024-10-07 13:36:33.852118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.347 [2024-10-07 13:36:33.852233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.347 [2024-10-07 13:36:33.852259] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.347 [2024-10-07 13:36:33.852275] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.347 [2024-10-07 13:36:33.852301] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.347 [2024-10-07 13:36:33.852323] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.347 [2024-10-07 13:36:33.852345] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.347 [2024-10-07 13:36:33.852361] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.347 [2024-10-07 13:36:33.852373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.347 [2024-10-07 13:36:33.852390] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.347 [2024-10-07 13:36:33.852405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.347 [2024-10-07 13:36:33.852418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.347 [2024-10-07 13:36:33.852443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.347 [2024-10-07 13:36:33.852459] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.347 [2024-10-07 13:36:33.862151] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.347 [2024-10-07 13:36:33.862185] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.347 [2024-10-07 13:36:33.862352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.347 [2024-10-07 13:36:33.862381] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.347 [2024-10-07 13:36:33.862399] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.347 [2024-10-07 13:36:33.862510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.347 [2024-10-07 13:36:33.862536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.347 [2024-10-07 13:36:33.862552] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.347 [2024-10-07 13:36:33.862746] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.347 [2024-10-07 13:36:33.862781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.347 [2024-10-07 13:36:33.862831] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.347 [2024-10-07 13:36:33.862851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.347 [2024-10-07 13:36:33.862865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.347 [2024-10-07 13:36:33.862883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.347 [2024-10-07 13:36:33.862897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.347 [2024-10-07 13:36:33.862910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.347 [2024-10-07 13:36:33.863103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.347 [2024-10-07 13:36:33.863126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.347 [2024-10-07 13:36:33.874672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.347 [2024-10-07 13:36:33.874707] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.347 [2024-10-07 13:36:33.875081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.347 [2024-10-07 13:36:33.875113] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.347 [2024-10-07 13:36:33.875130] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.347 [2024-10-07 13:36:33.875269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.347 [2024-10-07 13:36:33.875295] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.347 [2024-10-07 13:36:33.875312] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.347 [2024-10-07 13:36:33.875814] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.347 [2024-10-07 13:36:33.875847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.347 [2024-10-07 13:36:33.876168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.347 [2024-10-07 13:36:33.876194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.347 [2024-10-07 13:36:33.876209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.348 [2024-10-07 13:36:33.876227] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.348 [2024-10-07 13:36:33.876242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.348 [2024-10-07 13:36:33.876256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.348 [2024-10-07 13:36:33.876495] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.348 [2024-10-07 13:36:33.876518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.348 [2024-10-07 13:36:33.885409] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.348 [2024-10-07 13:36:33.885442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.348 [2024-10-07 13:36:33.885675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.348 [2024-10-07 13:36:33.885705] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.348 [2024-10-07 13:36:33.885728] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.348 [2024-10-07 13:36:33.885836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.348 [2024-10-07 13:36:33.885862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.348 [2024-10-07 13:36:33.885879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.348 [2024-10-07 13:36:33.885985] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.348 [2024-10-07 13:36:33.886012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.348 [2024-10-07 13:36:33.886141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.348 [2024-10-07 13:36:33.886162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.348 [2024-10-07 13:36:33.886175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.348 [2024-10-07 13:36:33.886191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.348 [2024-10-07 13:36:33.886221] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.348 [2024-10-07 13:36:33.886234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.348 [2024-10-07 13:36:33.887261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.348 [2024-10-07 13:36:33.887287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.348 [2024-10-07 13:36:33.896054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.348 [2024-10-07 13:36:33.896087] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.348 [2024-10-07 13:36:33.896223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.348 [2024-10-07 13:36:33.896252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.348 [2024-10-07 13:36:33.896268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.348 [2024-10-07 13:36:33.896405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.348 [2024-10-07 13:36:33.896431] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.348 [2024-10-07 13:36:33.896446] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.348 [2024-10-07 13:36:33.896472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.348 [2024-10-07 13:36:33.896494] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.348 [2024-10-07 13:36:33.896515] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.348 [2024-10-07 13:36:33.896530] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.348 [2024-10-07 13:36:33.896543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.348 [2024-10-07 13:36:33.896560] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.348 [2024-10-07 13:36:33.896574] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.348 [2024-10-07 13:36:33.896593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.348 [2024-10-07 13:36:33.896620] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.348 [2024-10-07 13:36:33.896636] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.348 [2024-10-07 13:36:33.906171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.348 [2024-10-07 13:36:33.906221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.348 [2024-10-07 13:36:33.906411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.348 [2024-10-07 13:36:33.906439] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.348 [2024-10-07 13:36:33.906456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.348 [2024-10-07 13:36:33.906574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.348 [2024-10-07 13:36:33.906600] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.348 [2024-10-07 13:36:33.906616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.348 [2024-10-07 13:36:33.906635] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.348 [2024-10-07 13:36:33.906920] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.348 [2024-10-07 13:36:33.906963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.348 [2024-10-07 13:36:33.906977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.348 [2024-10-07 13:36:33.906990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.348 [2024-10-07 13:36:33.907056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.348 [2024-10-07 13:36:33.907077] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.348 [2024-10-07 13:36:33.907090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.348 [2024-10-07 13:36:33.907120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.348 [2024-10-07 13:36:33.907144] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.348 [2024-10-07 13:36:33.918536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.348 [2024-10-07 13:36:33.918569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.348 [2024-10-07 13:36:33.918692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.348 [2024-10-07 13:36:33.918722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.348 [2024-10-07 13:36:33.918740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.348 [2024-10-07 13:36:33.918817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.348 [2024-10-07 13:36:33.918843] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.348 [2024-10-07 13:36:33.918859] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.348 [2024-10-07 13:36:33.919114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.348 [2024-10-07 13:36:33.919158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.348 [2024-10-07 13:36:33.919235] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.348 [2024-10-07 13:36:33.919257] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.348 [2024-10-07 13:36:33.919285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.348 [2024-10-07 13:36:33.919304] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.348 [2024-10-07 13:36:33.919319] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.348 [2024-10-07 13:36:33.919332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.348 [2024-10-07 13:36:33.919514] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.348 [2024-10-07 13:36:33.919537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.348 [2024-10-07 13:36:33.928842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.348 [2024-10-07 13:36:33.928875] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.348 [2024-10-07 13:36:33.929031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.348 [2024-10-07 13:36:33.929059] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.348 [2024-10-07 13:36:33.929076] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.348 [2024-10-07 13:36:33.929185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.348 [2024-10-07 13:36:33.929211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.348 [2024-10-07 13:36:33.929227] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.348 [2024-10-07 13:36:33.929253] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.348 [2024-10-07 13:36:33.929275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.348 [2024-10-07 13:36:33.929296] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.348 [2024-10-07 13:36:33.929311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.348 [2024-10-07 13:36:33.929325] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.348 [2024-10-07 13:36:33.929342] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.348 [2024-10-07 13:36:33.929356] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.348 [2024-10-07 13:36:33.929369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.348 [2024-10-07 13:36:33.932047] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.348 [2024-10-07 13:36:33.932075] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.348 [2024-10-07 13:36:33.938972] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.348 [2024-10-07 13:36:33.939017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.348 [2024-10-07 13:36:33.939214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.348 [2024-10-07 13:36:33.939243] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.348 [2024-10-07 13:36:33.939260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.348 [2024-10-07 13:36:33.939368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.348 [2024-10-07 13:36:33.939395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.348 [2024-10-07 13:36:33.939411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.348 [2024-10-07 13:36:33.939430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.348 [2024-10-07 13:36:33.939565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.348 [2024-10-07 13:36:33.939592] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.348 [2024-10-07 13:36:33.939606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.348 [2024-10-07 13:36:33.939620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.348 [2024-10-07 13:36:33.939748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.348 [2024-10-07 13:36:33.939771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.348 [2024-10-07 13:36:33.939786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.348 [2024-10-07 13:36:33.939799] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.348 [2024-10-07 13:36:33.939903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.348 [2024-10-07 13:36:33.950489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.348 [2024-10-07 13:36:33.950523] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.348 [2024-10-07 13:36:33.950745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.349 [2024-10-07 13:36:33.950775] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.349 [2024-10-07 13:36:33.950792] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.349 [2024-10-07 13:36:33.950871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.349 [2024-10-07 13:36:33.950896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.349 [2024-10-07 13:36:33.950912] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.349 [2024-10-07 13:36:33.951096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.349 [2024-10-07 13:36:33.951139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.349 [2024-10-07 13:36:33.951653] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.349 [2024-10-07 13:36:33.951702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.349 [2024-10-07 13:36:33.951718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.349 [2024-10-07 13:36:33.951736] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.349 [2024-10-07 13:36:33.951750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.349 [2024-10-07 13:36:33.951764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.349 [2024-10-07 13:36:33.951990] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.349 [2024-10-07 13:36:33.952014] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.349 [2024-10-07 13:36:33.963736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.349 [2024-10-07 13:36:33.963770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.349 [2024-10-07 13:36:33.964228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.349 [2024-10-07 13:36:33.964260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.349 [2024-10-07 13:36:33.964278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.349 [2024-10-07 13:36:33.964387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.349 [2024-10-07 13:36:33.964413] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.349 [2024-10-07 13:36:33.964430] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.349 [2024-10-07 13:36:33.964950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.349 [2024-10-07 13:36:33.964996] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.349 [2024-10-07 13:36:33.965243] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.349 [2024-10-07 13:36:33.965266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.349 [2024-10-07 13:36:33.965281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.349 [2024-10-07 13:36:33.965300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.349 [2024-10-07 13:36:33.965315] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.349 [2024-10-07 13:36:33.965328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.349 [2024-10-07 13:36:33.965541] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.349 [2024-10-07 13:36:33.965565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.349 [2024-10-07 13:36:33.976415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.349 [2024-10-07 13:36:33.976448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.349 [2024-10-07 13:36:33.977504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.349 [2024-10-07 13:36:33.977536] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.349 [2024-10-07 13:36:33.977555] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.349 [2024-10-07 13:36:33.977640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.349 [2024-10-07 13:36:33.977673] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.349 [2024-10-07 13:36:33.977691] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.349 [2024-10-07 13:36:33.978613] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.349 [2024-10-07 13:36:33.978644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.349 [2024-10-07 13:36:33.979472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.349 [2024-10-07 13:36:33.979503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.349 [2024-10-07 13:36:33.979519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.349 [2024-10-07 13:36:33.979536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.349 [2024-10-07 13:36:33.979551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.349 [2024-10-07 13:36:33.979564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.349 [2024-10-07 13:36:33.979961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.349 [2024-10-07 13:36:33.979986] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.349 [2024-10-07 13:36:33.986531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.349 [2024-10-07 13:36:33.986895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.349 [2024-10-07 13:36:33.987003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.349 [2024-10-07 13:36:33.987031] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.349 [2024-10-07 13:36:33.987047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.349 [2024-10-07 13:36:33.987397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.349 [2024-10-07 13:36:33.987427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.349 [2024-10-07 13:36:33.987443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.349 [2024-10-07 13:36:33.987464] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.349 [2024-10-07 13:36:33.987525] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.349 [2024-10-07 13:36:33.987549] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.349 [2024-10-07 13:36:33.987563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.349 [2024-10-07 13:36:33.987576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.349 [2024-10-07 13:36:33.987601] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.349 [2024-10-07 13:36:33.987620] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.349 [2024-10-07 13:36:33.987634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.349 [2024-10-07 13:36:33.987646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.349 [2024-10-07 13:36:33.987679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.349 [2024-10-07 13:36:33.996615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.349 [2024-10-07 13:36:33.996759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.349 [2024-10-07 13:36:33.996788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.349 [2024-10-07 13:36:33.996805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.349 [2024-10-07 13:36:33.996831] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.349 [2024-10-07 13:36:33.996860] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.349 [2024-10-07 13:36:33.996877] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.349 [2024-10-07 13:36:33.996891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.349 [2024-10-07 13:36:33.996915] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.349 [2024-10-07 13:36:33.996978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.349 [2024-10-07 13:36:33.997099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.349 [2024-10-07 13:36:33.997125] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.349 [2024-10-07 13:36:33.997141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.349 [2024-10-07 13:36:33.997165] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.349 [2024-10-07 13:36:33.997188] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.349 [2024-10-07 13:36:33.997204] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.349 [2024-10-07 13:36:33.997216] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.349 [2024-10-07 13:36:33.997254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.349 [2024-10-07 13:36:34.006916] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.349 [2024-10-07 13:36:34.007147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.349 [2024-10-07 13:36:34.007180] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.349 [2024-10-07 13:36:34.007199] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.349 [2024-10-07 13:36:34.007338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.349 [2024-10-07 13:36:34.009990] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.349 [2024-10-07 13:36:34.010030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.349 [2024-10-07 13:36:34.010048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.349 [2024-10-07 13:36:34.010062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.349 [2024-10-07 13:36:34.011043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.349 [2024-10-07 13:36:34.011167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.349 [2024-10-07 13:36:34.011210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.349 [2024-10-07 13:36:34.011228] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.349 [2024-10-07 13:36:34.011723] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.349 [2024-10-07 13:36:34.011946] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.349 [2024-10-07 13:36:34.011972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.349 [2024-10-07 13:36:34.011987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.349 [2024-10-07 13:36:34.012044] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.349 [2024-10-07 13:36:34.017479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.349 [2024-10-07 13:36:34.017645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.349 [2024-10-07 13:36:34.017717] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.349 [2024-10-07 13:36:34.017740] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.349 [2024-10-07 13:36:34.018104] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.349 [2024-10-07 13:36:34.018177] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.349 [2024-10-07 13:36:34.018198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.349 [2024-10-07 13:36:34.018213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.349 [2024-10-07 13:36:34.018239] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.349 [2024-10-07 13:36:34.023389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.349 [2024-10-07 13:36:34.025539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.349 [2024-10-07 13:36:34.025572] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.349 [2024-10-07 13:36:34.025589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.349 [2024-10-07 13:36:34.026279] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.349 [2024-10-07 13:36:34.026554] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.349 [2024-10-07 13:36:34.026580] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.349 [2024-10-07 13:36:34.026595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.349 [2024-10-07 13:36:34.026809] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.349 [2024-10-07 13:36:34.027787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.349 [2024-10-07 13:36:34.027932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.349 [2024-10-07 13:36:34.027960] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.349 [2024-10-07 13:36:34.027977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.349 [2024-10-07 13:36:34.028161] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.350 [2024-10-07 13:36:34.028231] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.350 [2024-10-07 13:36:34.028252] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.350 [2024-10-07 13:36:34.028265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.350 [2024-10-07 13:36:34.028305] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.350 [2024-10-07 13:36:34.033530] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.350 [2024-10-07 13:36:34.033694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.350 [2024-10-07 13:36:34.033723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.350 [2024-10-07 13:36:34.033746] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.350 [2024-10-07 13:36:34.034198] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.350 [2024-10-07 13:36:34.034260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.350 [2024-10-07 13:36:34.034280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.350 [2024-10-07 13:36:34.034307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.350 [2024-10-07 13:36:34.034334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.350 [2024-10-07 13:36:34.041912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.350 [2024-10-07 13:36:34.042521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.350 [2024-10-07 13:36:34.042553] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.350 [2024-10-07 13:36:34.042571] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.350 [2024-10-07 13:36:34.042797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.350 [2024-10-07 13:36:34.042856] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.350 [2024-10-07 13:36:34.042876] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.350 [2024-10-07 13:36:34.042890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.350 [2024-10-07 13:36:34.043073] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.350 [2024-10-07 13:36:34.043642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.350 [2024-10-07 13:36:34.043797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.350 [2024-10-07 13:36:34.043826] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.350 [2024-10-07 13:36:34.043843] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.350 [2024-10-07 13:36:34.043869] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.350 [2024-10-07 13:36:34.043893] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.350 [2024-10-07 13:36:34.043908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.350 [2024-10-07 13:36:34.043922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.350 [2024-10-07 13:36:34.043946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.350 [2024-10-07 13:36:34.057451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.350 [2024-10-07 13:36:34.057596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.350 [2024-10-07 13:36:34.057734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.350 [2024-10-07 13:36:34.057764] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.350 [2024-10-07 13:36:34.057780] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.350 [2024-10-07 13:36:34.057872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.350 [2024-10-07 13:36:34.057906] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.350 [2024-10-07 13:36:34.057924] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.350 [2024-10-07 13:36:34.057944] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.350 [2024-10-07 13:36:34.057971] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.350 [2024-10-07 13:36:34.057990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.350 [2024-10-07 13:36:34.058004] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.350 [2024-10-07 13:36:34.058017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.350 [2024-10-07 13:36:34.058060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.350 [2024-10-07 13:36:34.058082] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.350 [2024-10-07 13:36:34.058095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.350 [2024-10-07 13:36:34.058128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.350 [2024-10-07 13:36:34.058151] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.350 [2024-10-07 13:36:34.068985] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.350 [2024-10-07 13:36:34.069033] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.350 [2024-10-07 13:36:34.069290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.350 [2024-10-07 13:36:34.069319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.350 [2024-10-07 13:36:34.069336] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.350 [2024-10-07 13:36:34.069422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.350 [2024-10-07 13:36:34.069449] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.350 [2024-10-07 13:36:34.069466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.350 [2024-10-07 13:36:34.069594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.350 [2024-10-07 13:36:34.069622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.350 [2024-10-07 13:36:34.069737] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.350 [2024-10-07 13:36:34.069760] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.350 [2024-10-07 13:36:34.069774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.350 [2024-10-07 13:36:34.069791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.350 [2024-10-07 13:36:34.069805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.350 [2024-10-07 13:36:34.069818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.350 [2024-10-07 13:36:34.069939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.350 [2024-10-07 13:36:34.069961] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.350 [2024-10-07 13:36:34.079112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.350 [2024-10-07 13:36:34.079158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.350 [2024-10-07 13:36:34.079293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.350 [2024-10-07 13:36:34.079321] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.350 [2024-10-07 13:36:34.079338] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.350 [2024-10-07 13:36:34.079486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.350 [2024-10-07 13:36:34.079513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.350 [2024-10-07 13:36:34.079529] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.350 [2024-10-07 13:36:34.079548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.350 [2024-10-07 13:36:34.079574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.350 [2024-10-07 13:36:34.079593] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.350 [2024-10-07 13:36:34.079606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.350 [2024-10-07 13:36:34.079619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.350 [2024-10-07 13:36:34.079644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.350 [2024-10-07 13:36:34.079661] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.350 [2024-10-07 13:36:34.079685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.350 [2024-10-07 13:36:34.079699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.350 [2024-10-07 13:36:34.079722] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.350 [2024-10-07 13:36:34.089211] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.350 [2024-10-07 13:36:34.089390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.350 [2024-10-07 13:36:34.089421] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.350 [2024-10-07 13:36:34.089439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.350 [2024-10-07 13:36:34.089478] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.350 [2024-10-07 13:36:34.089509] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.350 [2024-10-07 13:36:34.089538] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.350 [2024-10-07 13:36:34.089555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.350 [2024-10-07 13:36:34.089568] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.350 [2024-10-07 13:36:34.089592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.350 [2024-10-07 13:36:34.089780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.350 [2024-10-07 13:36:34.089807] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.350 [2024-10-07 13:36:34.089824] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.350 [2024-10-07 13:36:34.089854] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.350 [2024-10-07 13:36:34.089879] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.350 [2024-10-07 13:36:34.089895] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.350 [2024-10-07 13:36:34.089908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.350 [2024-10-07 13:36:34.089939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.350 [2024-10-07 13:36:34.105225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.350 [2024-10-07 13:36:34.105361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.350 [2024-10-07 13:36:34.105541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.350 [2024-10-07 13:36:34.105571] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.351 [2024-10-07 13:36:34.105589] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.105686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.105714] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.351 [2024-10-07 13:36:34.105730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.105749] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.106284] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.106312] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.106326] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.106339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.351 [2024-10-07 13:36:34.106566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.351 [2024-10-07 13:36:34.106591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.106606] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.106619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.351 [2024-10-07 13:36:34.106833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.351 [2024-10-07 13:36:34.119524] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.119557] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.119699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.119728] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.351 [2024-10-07 13:36:34.119745] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.119823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.119850] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.351 [2024-10-07 13:36:34.119873] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.119899] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.119921] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.119941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.119956] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.119970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.351 [2024-10-07 13:36:34.119987] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.120001] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.120014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.351 [2024-10-07 13:36:34.120038] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.351 [2024-10-07 13:36:34.120055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.351 [2024-10-07 13:36:34.133777] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.133810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.135834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.135867] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.351 [2024-10-07 13:36:34.135884] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.135970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.135995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.351 [2024-10-07 13:36:34.136011] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.136695] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.136727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.137115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.137154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.137167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.351 [2024-10-07 13:36:34.137184] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.137198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.137210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.351 [2024-10-07 13:36:34.137285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.351 [2024-10-07 13:36:34.137306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.351 [2024-10-07 13:36:34.143895] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.143949] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.144183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.144212] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.351 [2024-10-07 13:36:34.144229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.144348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.144376] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.351 [2024-10-07 13:36:34.144392] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.144411] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.144437] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.144456] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.144469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.144481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.351 [2024-10-07 13:36:34.144506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.351 [2024-10-07 13:36:34.144523] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.144535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.144548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.351 [2024-10-07 13:36:34.144570] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.351 [2024-10-07 13:36:34.154976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.155010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.155178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.155208] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.351 [2024-10-07 13:36:34.155225] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.155340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.155366] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.351 [2024-10-07 13:36:34.155383] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.155408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.155430] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.155466] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.155486] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.155499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.351 [2024-10-07 13:36:34.155522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.155553] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.155567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.351 [2024-10-07 13:36:34.155592] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.351 [2024-10-07 13:36:34.155623] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.351 [2024-10-07 13:36:34.166952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.166988] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.167386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.167418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.351 [2024-10-07 13:36:34.167435] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.167519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.167546] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.351 [2024-10-07 13:36:34.167562] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.167924] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.167970] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.168045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.168065] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.168079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.351 [2024-10-07 13:36:34.168097] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.168111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.168124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.351 [2024-10-07 13:36:34.168307] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.351 [2024-10-07 13:36:34.168332] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.351 [2024-10-07 13:36:34.177416] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.177450] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.177691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.177724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.351 [2024-10-07 13:36:34.177742] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.177854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.177881] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.351 [2024-10-07 13:36:34.177897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.178012] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.178040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.178171] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.178193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.178207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.351 [2024-10-07 13:36:34.178223] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.178237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.178249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.351 [2024-10-07 13:36:34.178385] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.351 [2024-10-07 13:36:34.178406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.351 [2024-10-07 13:36:34.187839] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.187873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.188109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.188140] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.351 [2024-10-07 13:36:34.188158] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.188241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.188268] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.351 [2024-10-07 13:36:34.188284] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.188392] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.188420] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.351 [2024-10-07 13:36:34.188550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.188571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.188584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.351 [2024-10-07 13:36:34.188601] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.351 [2024-10-07 13:36:34.188614] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.351 [2024-10-07 13:36:34.188626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.351 [2024-10-07 13:36:34.188703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.351 [2024-10-07 13:36:34.188724] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.351 [2024-10-07 13:36:34.197950] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.351 [2024-10-07 13:36:34.198242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.351 [2024-10-07 13:36:34.198274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.351 [2024-10-07 13:36:34.198298] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.351 [2024-10-07 13:36:34.198352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.352 [2024-10-07 13:36:34.198388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.352 [2024-10-07 13:36:34.198505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.352 [2024-10-07 13:36:34.198533] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.352 [2024-10-07 13:36:34.198549] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.352 [2024-10-07 13:36:34.198564] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.352 [2024-10-07 13:36:34.198577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.352 [2024-10-07 13:36:34.198590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.352 [2024-10-07 13:36:34.198785] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.352 [2024-10-07 13:36:34.198815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.352 [2024-10-07 13:36:34.198866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.352 [2024-10-07 13:36:34.198887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.352 [2024-10-07 13:36:34.198902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.352 [2024-10-07 13:36:34.198926] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.352 [2024-10-07 13:36:34.208269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.352 [2024-10-07 13:36:34.208534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.352 [2024-10-07 13:36:34.208565] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.352 [2024-10-07 13:36:34.208583] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.352 [2024-10-07 13:36:34.211871] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.352 [2024-10-07 13:36:34.212772] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.352 [2024-10-07 13:36:34.212798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.352 [2024-10-07 13:36:34.212812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.352 [2024-10-07 13:36:34.213215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.352 [2024-10-07 13:36:34.213255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.352 [2024-10-07 13:36:34.213613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.352 [2024-10-07 13:36:34.213644] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.352 [2024-10-07 13:36:34.213660] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.352 [2024-10-07 13:36:34.213878] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.352 [2024-10-07 13:36:34.213941] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.352 [2024-10-07 13:36:34.213962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.352 [2024-10-07 13:36:34.213976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.352 [2024-10-07 13:36:34.214001] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.352 [2024-10-07 13:36:34.218356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.352 [2024-10-07 13:36:34.218534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.352 [2024-10-07 13:36:34.218564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.352 [2024-10-07 13:36:34.218581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.352 [2024-10-07 13:36:34.218606] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.352 [2024-10-07 13:36:34.218630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.352 [2024-10-07 13:36:34.218645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.352 [2024-10-07 13:36:34.218659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.352 [2024-10-07 13:36:34.218693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.352 [2024-10-07 13:36:34.228952] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.352 [2024-10-07 13:36:34.229002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.352 [2024-10-07 13:36:34.229111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.352 [2024-10-07 13:36:34.229139] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.352 [2024-10-07 13:36:34.229156] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.352 [2024-10-07 13:36:34.229252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.352 [2024-10-07 13:36:34.229280] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.352 [2024-10-07 13:36:34.229297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.352 [2024-10-07 13:36:34.229316] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.352 [2024-10-07 13:36:34.229342] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.352 [2024-10-07 13:36:34.229360] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.352 [2024-10-07 13:36:34.229373] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.352 [2024-10-07 13:36:34.229387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.352 [2024-10-07 13:36:34.229412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.352 [2024-10-07 13:36:34.229429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.352 [2024-10-07 13:36:34.229441] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.352 [2024-10-07 13:36:34.229454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.352 [2024-10-07 13:36:34.229477] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.352 [2024-10-07 13:36:34.243732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.352 [2024-10-07 13:36:34.243766] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.352 [2024-10-07 13:36:34.243906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.352 [2024-10-07 13:36:34.243937] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.352 [2024-10-07 13:36:34.243955] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.352 [2024-10-07 13:36:34.244053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.352 [2024-10-07 13:36:34.244080] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.352 [2024-10-07 13:36:34.244096] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.352 [2024-10-07 13:36:34.245393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.352 [2024-10-07 13:36:34.245424] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.352 [2024-10-07 13:36:34.245782] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.352 [2024-10-07 13:36:34.245807] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.352 [2024-10-07 13:36:34.245822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.352 [2024-10-07 13:36:34.245839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.352 [2024-10-07 13:36:34.245854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.352 [2024-10-07 13:36:34.245867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.352 [2024-10-07 13:36:34.246157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.352 [2024-10-07 13:36:34.246183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.352 [2024-10-07 13:36:34.257135] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.352 [2024-10-07 13:36:34.257168] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.352 [2024-10-07 13:36:34.259078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.352 [2024-10-07 13:36:34.259110] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.352 [2024-10-07 13:36:34.259128] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.352 [2024-10-07 13:36:34.259211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.352 [2024-10-07 13:36:34.259237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.352 [2024-10-07 13:36:34.259253] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.352 [2024-10-07 13:36:34.260139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.352 [2024-10-07 13:36:34.260169] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.352 [2024-10-07 13:36:34.260573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.352 [2024-10-07 13:36:34.260598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.352 [2024-10-07 13:36:34.260618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.352 [2024-10-07 13:36:34.260637] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.352 [2024-10-07 13:36:34.260652] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.352 [2024-10-07 13:36:34.260675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.352 [2024-10-07 13:36:34.260895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.352 [2024-10-07 13:36:34.260919] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.352 [2024-10-07 13:36:34.267279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.352 [2024-10-07 13:36:34.267326] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.352 [2024-10-07 13:36:34.267464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.352 [2024-10-07 13:36:34.267494] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.352 [2024-10-07 13:36:34.267512] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.352 [2024-10-07 13:36:34.267587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.352 [2024-10-07 13:36:34.267614] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.352 [2024-10-07 13:36:34.267630] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.352 [2024-10-07 13:36:34.268068] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.352 [2024-10-07 13:36:34.268096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.352 [2024-10-07 13:36:34.268128] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.352 [2024-10-07 13:36:34.268144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.352 [2024-10-07 13:36:34.268157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.352 [2024-10-07 13:36:34.268175] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.352 [2024-10-07 13:36:34.268189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.352 [2024-10-07 13:36:34.268202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.352 [2024-10-07 13:36:34.268227] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.352 [2024-10-07 13:36:34.268244] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.352 [2024-10-07 13:36:34.279213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.352 [2024-10-07 13:36:34.279247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.352 [2024-10-07 13:36:34.279483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.352 [2024-10-07 13:36:34.279513] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.352 [2024-10-07 13:36:34.279531] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.352 [2024-10-07 13:36:34.279641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.352 [2024-10-07 13:36:34.279688] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.352 [2024-10-07 13:36:34.279711] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.352 [2024-10-07 13:36:34.279737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.352 [2024-10-07 13:36:34.279759] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.352 [2024-10-07 13:36:34.279780] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.352 [2024-10-07 13:36:34.279795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.352 [2024-10-07 13:36:34.279808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.352 [2024-10-07 13:36:34.279825] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.352 [2024-10-07 13:36:34.279839] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.352 [2024-10-07 13:36:34.279852] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.352 [2024-10-07 13:36:34.279877] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.352 [2024-10-07 13:36:34.279893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.352 [2024-10-07 13:36:34.291466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.352 [2024-10-07 13:36:34.291516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.352 [2024-10-07 13:36:34.291858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.352 [2024-10-07 13:36:34.291890] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.352 [2024-10-07 13:36:34.291908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.352 [2024-10-07 13:36:34.292029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.352 [2024-10-07 13:36:34.292057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.352 [2024-10-07 13:36:34.292074] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.352 [2024-10-07 13:36:34.292278] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.352 [2024-10-07 13:36:34.292308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.352 [2024-10-07 13:36:34.292356] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.352 [2024-10-07 13:36:34.292378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.352 [2024-10-07 13:36:34.292392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.352 [2024-10-07 13:36:34.292408] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.352 [2024-10-07 13:36:34.292423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.352 [2024-10-07 13:36:34.292436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.352 [2024-10-07 13:36:34.292618] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.352 [2024-10-07 13:36:34.292642] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.352 [2024-10-07 13:36:34.308093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.352 [2024-10-07 13:36:34.308133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.352 [2024-10-07 13:36:34.308298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.308329] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.353 [2024-10-07 13:36:34.308346] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.308457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.308483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.353 [2024-10-07 13:36:34.308500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.308526] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.308548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.308570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.353 [2024-10-07 13:36:34.308585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.353 [2024-10-07 13:36:34.308598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.353 [2024-10-07 13:36:34.308615] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.353 [2024-10-07 13:36:34.308630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.353 [2024-10-07 13:36:34.308643] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.353 [2024-10-07 13:36:34.308679] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.353 [2024-10-07 13:36:34.308697] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.353 [2024-10-07 13:36:34.321833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.353 [2024-10-07 13:36:34.321867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.353 [2024-10-07 13:36:34.322191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.322222] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.353 [2024-10-07 13:36:34.322240] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.322325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.322352] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.353 [2024-10-07 13:36:34.322368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.322574] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.322603] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.322814] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.353 [2024-10-07 13:36:34.322838] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.353 [2024-10-07 13:36:34.322853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.353 [2024-10-07 13:36:34.322876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.353 [2024-10-07 13:36:34.322892] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.353 [2024-10-07 13:36:34.322904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.353 [2024-10-07 13:36:34.323108] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.353 [2024-10-07 13:36:34.323133] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.353 [2024-10-07 13:36:34.337371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.353 [2024-10-07 13:36:34.337420] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.353 [2024-10-07 13:36:34.337933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.337975] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.353 [2024-10-07 13:36:34.337992] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.338076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.338102] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.353 [2024-10-07 13:36:34.338118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.338322] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.338352] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.338861] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.353 [2024-10-07 13:36:34.338887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.353 [2024-10-07 13:36:34.338907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.353 [2024-10-07 13:36:34.338925] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.353 [2024-10-07 13:36:34.338939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.353 [2024-10-07 13:36:34.338952] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.353 [2024-10-07 13:36:34.339204] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.353 [2024-10-07 13:36:34.339229] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.353 [2024-10-07 13:36:34.348461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.353 [2024-10-07 13:36:34.348493] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.353 [2024-10-07 13:36:34.348692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.348724] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.353 [2024-10-07 13:36:34.348741] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.348881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.348908] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.353 [2024-10-07 13:36:34.348930] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.349046] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.349074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.352066] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.353 [2024-10-07 13:36:34.352094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.353 [2024-10-07 13:36:34.352113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.353 [2024-10-07 13:36:34.352130] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.353 [2024-10-07 13:36:34.352145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.353 [2024-10-07 13:36:34.352157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.353 [2024-10-07 13:36:34.353199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.353 [2024-10-07 13:36:34.353224] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.353 [2024-10-07 13:36:34.358576] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.353 [2024-10-07 13:36:34.358622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.353 [2024-10-07 13:36:34.358761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.358791] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.353 [2024-10-07 13:36:34.358809] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.358924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.358951] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.353 [2024-10-07 13:36:34.358967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.358987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.359013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.359032] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.353 [2024-10-07 13:36:34.359045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.353 [2024-10-07 13:36:34.359058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.353 [2024-10-07 13:36:34.359083] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.353 [2024-10-07 13:36:34.359100] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.353 [2024-10-07 13:36:34.359113] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.353 [2024-10-07 13:36:34.359126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.353 [2024-10-07 13:36:34.359165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.353 [2024-10-07 13:36:34.368818] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.353 [2024-10-07 13:36:34.368851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.353 [2024-10-07 13:36:34.369021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.369051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.353 [2024-10-07 13:36:34.369069] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.369170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.369197] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.353 [2024-10-07 13:36:34.369213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.369238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.369260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.369281] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.353 [2024-10-07 13:36:34.369296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.353 [2024-10-07 13:36:34.369309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.353 [2024-10-07 13:36:34.369326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.353 [2024-10-07 13:36:34.369340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.353 [2024-10-07 13:36:34.369353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.353 [2024-10-07 13:36:34.369535] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.353 [2024-10-07 13:36:34.369560] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.353 [2024-10-07 13:36:34.382995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.353 [2024-10-07 13:36:34.383028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.353 [2024-10-07 13:36:34.383303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.383334] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.353 [2024-10-07 13:36:34.383352] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.383453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.383480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.353 [2024-10-07 13:36:34.383497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.383829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.383882] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.384433] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.353 [2024-10-07 13:36:34.384472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.353 [2024-10-07 13:36:34.384491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.353 [2024-10-07 13:36:34.384522] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.353 [2024-10-07 13:36:34.384543] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.353 [2024-10-07 13:36:34.384557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.353 [2024-10-07 13:36:34.384800] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.353 [2024-10-07 13:36:34.384825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.353 [2024-10-07 13:36:34.396880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.353 [2024-10-07 13:36:34.396915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.353 [2024-10-07 13:36:34.397270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.397300] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.353 [2024-10-07 13:36:34.397319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.397424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.353 [2024-10-07 13:36:34.397451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.353 [2024-10-07 13:36:34.397468] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.353 [2024-10-07 13:36:34.397953] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.353 [2024-10-07 13:36:34.397984] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.354 [2024-10-07 13:36:34.398328] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.354 [2024-10-07 13:36:34.398353] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.354 [2024-10-07 13:36:34.398367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.354 [2024-10-07 13:36:34.398385] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.354 [2024-10-07 13:36:34.398400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.354 [2024-10-07 13:36:34.398413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.354 [2024-10-07 13:36:34.398649] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.354 [2024-10-07 13:36:34.398684] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.354 [2024-10-07 13:36:34.407189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.407221] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.407444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.407474] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.354 [2024-10-07 13:36:34.407492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.407577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.407604] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.354 [2024-10-07 13:36:34.407621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.407653] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.407685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.407708] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.407724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.407737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.354 [2024-10-07 13:36:34.407754] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.407768] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.407782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.354 [2024-10-07 13:36:34.407806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.354 [2024-10-07 13:36:34.407823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.354 [2024-10-07 13:36:34.417299] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.417361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.417487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.417516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.354 [2024-10-07 13:36:34.417533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.417870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.417900] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.354 [2024-10-07 13:36:34.417917] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.417936] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.418075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.418101] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.418114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.418128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.354 [2024-10-07 13:36:34.418235] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.354 [2024-10-07 13:36:34.418274] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.418287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.418300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.354 [2024-10-07 13:36:34.418413] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.354 [2024-10-07 13:36:34.428914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.428948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.429118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.429154] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.354 [2024-10-07 13:36:34.429172] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.429285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.429311] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.354 [2024-10-07 13:36:34.429327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.429510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.429539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.429602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.429622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.429662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.354 [2024-10-07 13:36:34.429692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.429707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.429719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.354 [2024-10-07 13:36:34.430199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.354 [2024-10-07 13:36:34.430223] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.354 [2024-10-07 13:36:34.442370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.442404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.442830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.442862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.354 [2024-10-07 13:36:34.442879] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.442973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.442999] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.354 [2024-10-07 13:36:34.443015] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.443249] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.443280] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.443330] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.443351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.443365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.354 [2024-10-07 13:36:34.443384] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.443398] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.443416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.354 [2024-10-07 13:36:34.443903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.354 [2024-10-07 13:36:34.443929] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.354 [2024-10-07 13:36:34.452776] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.452808] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.453003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.453033] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.354 [2024-10-07 13:36:34.453051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.453160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.453187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.354 [2024-10-07 13:36:34.453204] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.456021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.456053] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.456952] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.456994] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.457008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.354 [2024-10-07 13:36:34.457025] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.457038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.457051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.354 [2024-10-07 13:36:34.457805] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.354 [2024-10-07 13:36:34.457832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.354 [2024-10-07 13:36:34.463048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.463079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.463263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.463293] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.354 [2024-10-07 13:36:34.463310] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.463444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.463482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.354 [2024-10-07 13:36:34.463499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.463523] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.463551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.463573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.463588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.463602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.354 [2024-10-07 13:36:34.463619] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.463633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.463656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.354 [2024-10-07 13:36:34.463706] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.354 [2024-10-07 13:36:34.463723] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.354 [2024-10-07 13:36:34.473198] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.473247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.473380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.473410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.354 [2024-10-07 13:36:34.473427] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.473570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.473597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.354 [2024-10-07 13:36:34.473613] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.473631] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.473890] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.473933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.473947] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.473960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.354 [2024-10-07 13:36:34.474025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.354 [2024-10-07 13:36:34.474061] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.474074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.474087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.354 [2024-10-07 13:36:34.474112] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.354 [2024-10-07 13:36:34.485830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.485864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.486058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.486088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.354 [2024-10-07 13:36:34.486111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.486220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.486247] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.354 [2024-10-07 13:36:34.486263] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.486517] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.486547] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.354 [2024-10-07 13:36:34.486604] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.486627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.486641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.354 [2024-10-07 13:36:34.486659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.354 [2024-10-07 13:36:34.486685] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.354 [2024-10-07 13:36:34.486699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.354 [2024-10-07 13:36:34.486742] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.354 [2024-10-07 13:36:34.486762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.354 [2024-10-07 13:36:34.496284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.496316] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.354 [2024-10-07 13:36:34.496453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.496482] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.354 [2024-10-07 13:36:34.496499] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.354 [2024-10-07 13:36:34.496604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.354 [2024-10-07 13:36:34.496631] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.355 [2024-10-07 13:36:34.496647] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.499292] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.499324] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.500199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.500224] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.500246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.355 [2024-10-07 13:36:34.500263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.500277] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.500296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.355 [2024-10-07 13:36:34.500833] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.355 [2024-10-07 13:36:34.500858] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.355 [2024-10-07 13:36:34.506396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.506442] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.506573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.506603] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.355 [2024-10-07 13:36:34.506620] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.506731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.506759] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.355 [2024-10-07 13:36:34.506775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.506794] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.506820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.506838] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.506852] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.506864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.355 [2024-10-07 13:36:34.506889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.355 [2024-10-07 13:36:34.506907] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.506919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.506933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.355 [2024-10-07 13:36:34.506955] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.355 [2024-10-07 13:36:34.516568] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.516617] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.516760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.516790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.355 [2024-10-07 13:36:34.516807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.516924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.516952] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.355 [2024-10-07 13:36:34.516968] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.516987] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.517013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.517038] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.517052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.517064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.355 [2024-10-07 13:36:34.517311] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.355 [2024-10-07 13:36:34.517336] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.517351] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.517380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.355 [2024-10-07 13:36:34.517445] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.355 [2024-10-07 13:36:34.529235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.529283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.529785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.529818] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.355 [2024-10-07 13:36:34.529835] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.529916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.529941] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.355 [2024-10-07 13:36:34.529957] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.530174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.530204] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.530777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.530802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.530822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.355 [2024-10-07 13:36:34.530839] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.530854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.530866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.355 [2024-10-07 13:36:34.531091] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.355 [2024-10-07 13:36:34.531116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.355 [2024-10-07 13:36:34.539954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.540001] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.540155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.540185] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.355 [2024-10-07 13:36:34.540211] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.540286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.540313] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.355 [2024-10-07 13:36:34.540329] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.543189] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.543221] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.543885] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.543909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.543930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.355 [2024-10-07 13:36:34.543963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.543978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.543991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.355 [2024-10-07 13:36:34.544738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.355 [2024-10-07 13:36:34.544779] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.355 [2024-10-07 13:36:34.550252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.550283] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.550468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.550497] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.355 [2024-10-07 13:36:34.550514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.550650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.550685] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.355 [2024-10-07 13:36:34.550703] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.550729] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.550750] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.550771] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.550786] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.550800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.355 [2024-10-07 13:36:34.550816] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.550831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.550843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.355 [2024-10-07 13:36:34.550874] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.355 [2024-10-07 13:36:34.550891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.355 [2024-10-07 13:36:34.560451] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.560484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.560603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.560633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.355 [2024-10-07 13:36:34.560651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.560770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.560797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.355 [2024-10-07 13:36:34.560814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.560840] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.560862] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.560883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.560897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.560911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.355 [2024-10-07 13:36:34.560928] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.560942] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.560955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.355 [2024-10-07 13:36:34.560980] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.355 [2024-10-07 13:36:34.561012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.355 [2024-10-07 13:36:34.573363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.573396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.573595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.573626] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.355 [2024-10-07 13:36:34.573644] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.573734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.573762] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.355 [2024-10-07 13:36:34.573779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.574290] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.574320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.574571] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.574601] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.574616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.355 [2024-10-07 13:36:34.574635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.574650] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.574663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.355 [2024-10-07 13:36:34.574892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.355 [2024-10-07 13:36:34.574917] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.355 [2024-10-07 13:36:34.583748] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.583781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.584057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.584088] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.355 [2024-10-07 13:36:34.584105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.584196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.584223] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.355 [2024-10-07 13:36:34.584239] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.584346] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.584374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.355 [2024-10-07 13:36:34.587260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.587287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.587308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.355 [2024-10-07 13:36:34.587325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.355 [2024-10-07 13:36:34.587340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.355 [2024-10-07 13:36:34.587352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.355 [2024-10-07 13:36:34.588230] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.355 [2024-10-07 13:36:34.588270] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.355 [2024-10-07 13:36:34.594134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.594166] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.355 [2024-10-07 13:36:34.594359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.594390] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.355 [2024-10-07 13:36:34.594407] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.355 [2024-10-07 13:36:34.594553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.355 [2024-10-07 13:36:34.594580] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.356 [2024-10-07 13:36:34.594597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.594623] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.594645] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.594673] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.594689] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.594703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.356 [2024-10-07 13:36:34.594719] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.594734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.594747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.356 [2024-10-07 13:36:34.594771] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.356 [2024-10-07 13:36:34.594788] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.356 [2024-10-07 13:36:34.604247] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.604302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.604433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.604463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.356 [2024-10-07 13:36:34.604481] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.604752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.604805] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.356 [2024-10-07 13:36:34.604822] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.604841] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.604893] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.604916] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.604929] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.604942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.356 [2024-10-07 13:36:34.605124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.356 [2024-10-07 13:36:34.605150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.605182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.605195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.356 [2024-10-07 13:36:34.605264] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.356 [2024-10-07 13:36:34.617263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.617298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.617682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.617713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.356 [2024-10-07 13:36:34.617730] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.617839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.617864] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.356 [2024-10-07 13:36:34.617880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.618436] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.618465] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.618725] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.618749] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.618764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.356 [2024-10-07 13:36:34.618781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.618796] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.618809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.356 [2024-10-07 13:36:34.619012] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.356 [2024-10-07 13:36:34.619037] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.356 [2024-10-07 13:36:34.628705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.628739] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.628965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.628995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.356 [2024-10-07 13:36:34.629012] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.629122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.629149] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.356 [2024-10-07 13:36:34.629165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.629287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.629315] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.629432] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.629473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.629487] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.356 [2024-10-07 13:36:34.629504] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.629518] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.629530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.356 [2024-10-07 13:36:34.629660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.356 [2024-10-07 13:36:34.629693] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.356 8458.77 IOPS, 33.04 MiB/s [2024-10-07T11:36:38.068Z] [2024-10-07 13:36:34.638944] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.638976] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.639168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.639198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.356 [2024-10-07 13:36:34.639216] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.639324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.639351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.356 [2024-10-07 13:36:34.639367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.641612] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.641644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.641778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.641802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.641815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.356 [2024-10-07 13:36:34.641833] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.641847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.641860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.356 [2024-10-07 13:36:34.641899] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.356 [2024-10-07 13:36:34.641918] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.356 [2024-10-07 13:36:34.649109] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.649141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.649280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.649310] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.356 [2024-10-07 13:36:34.649327] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.649442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.649473] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.356 [2024-10-07 13:36:34.649490] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.649683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.649712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.649777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.649813] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.649828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.356 [2024-10-07 13:36:34.649845] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.649860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.649872] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.356 [2024-10-07 13:36:34.650055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.356 [2024-10-07 13:36:34.650080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.356 [2024-10-07 13:36:34.661663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.661705] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.662052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.662084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.356 [2024-10-07 13:36:34.662101] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.662237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.662264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.356 [2024-10-07 13:36:34.662280] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.662797] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.662829] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.663150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.663174] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.663188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.356 [2024-10-07 13:36:34.663206] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.663220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.663247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.356 [2024-10-07 13:36:34.663476] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.356 [2024-10-07 13:36:34.663501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.356 [2024-10-07 13:36:34.672455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.672488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.672690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.672721] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.356 [2024-10-07 13:36:34.672739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.672828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.672855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.356 [2024-10-07 13:36:34.672871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.672978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.673005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.673136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.673157] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.673170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.356 [2024-10-07 13:36:34.673186] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.673200] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.673212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.356 [2024-10-07 13:36:34.674249] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.356 [2024-10-07 13:36:34.674274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.356 [2024-10-07 13:36:34.683689] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.683721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.356 [2024-10-07 13:36:34.683887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.683917] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.356 [2024-10-07 13:36:34.683934] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.684016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.356 [2024-10-07 13:36:34.684043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.356 [2024-10-07 13:36:34.684060] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.356 [2024-10-07 13:36:34.686031] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.686061] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.356 [2024-10-07 13:36:34.686154] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.686176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.686196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.356 [2024-10-07 13:36:34.686215] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.356 [2024-10-07 13:36:34.686229] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.356 [2024-10-07 13:36:34.686242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.356 [2024-10-07 13:36:34.686268] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.356 [2024-10-07 13:36:34.686285] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.356 [2024-10-07 13:36:34.693800] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.693850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.694012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.694041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.357 [2024-10-07 13:36:34.694058] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.694173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.694201] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.357 [2024-10-07 13:36:34.694217] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.694236] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.694262] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.694280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.694294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.694307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.357 [2024-10-07 13:36:34.694331] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.357 [2024-10-07 13:36:34.694349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.694362] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.694374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.357 [2024-10-07 13:36:34.694396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.357 [2024-10-07 13:36:34.707258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.707291] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.707553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.707584] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.357 [2024-10-07 13:36:34.707601] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.707711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.707739] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.357 [2024-10-07 13:36:34.707762] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.708040] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.708069] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.708237] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.708263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.708277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.357 [2024-10-07 13:36:34.708295] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.708309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.708322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.357 [2024-10-07 13:36:34.708361] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.357 [2024-10-07 13:36:34.708380] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.357 [2024-10-07 13:36:34.721460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.721494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.721891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.721928] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.357 [2024-10-07 13:36:34.721946] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.722064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.722091] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.357 [2024-10-07 13:36:34.722108] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.722451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.722507] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.722753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.722778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.722793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.357 [2024-10-07 13:36:34.722810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.722825] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.722838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.357 [2024-10-07 13:36:34.722903] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.357 [2024-10-07 13:36:34.722924] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.357 [2024-10-07 13:36:34.736477] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.736516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.736889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.736930] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.357 [2024-10-07 13:36:34.736947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.737031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.737056] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.357 [2024-10-07 13:36:34.737072] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.737276] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.737306] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.737354] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.737375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.737389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.357 [2024-10-07 13:36:34.737407] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.737421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.737434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.357 [2024-10-07 13:36:34.737616] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.357 [2024-10-07 13:36:34.737640] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.357 [2024-10-07 13:36:34.747012] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.747045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.747248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.747279] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.357 [2024-10-07 13:36:34.747297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.747376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.747403] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.357 [2024-10-07 13:36:34.747419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.747527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.747555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.747683] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.747707] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.747722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.357 [2024-10-07 13:36:34.747745] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.747761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.747773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.357 [2024-10-07 13:36:34.750452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.357 [2024-10-07 13:36:34.750479] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.357 [2024-10-07 13:36:34.757124] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.757169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.757362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.757391] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.357 [2024-10-07 13:36:34.757408] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.757522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.757549] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.357 [2024-10-07 13:36:34.757565] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.757583] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.757609] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.757628] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.757641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.757654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.357 [2024-10-07 13:36:34.757687] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.357 [2024-10-07 13:36:34.757707] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.757720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.757733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.357 [2024-10-07 13:36:34.757756] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.357 [2024-10-07 13:36:34.767513] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.767546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.767703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.767734] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.357 [2024-10-07 13:36:34.767751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.767832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.767859] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.357 [2024-10-07 13:36:34.767876] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.768067] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.768111] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.768173] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.768194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.768208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.357 [2024-10-07 13:36:34.768241] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.768256] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.768269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.357 [2024-10-07 13:36:34.768748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.357 [2024-10-07 13:36:34.768774] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.357 [2024-10-07 13:36:34.781100] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.781134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.781436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.781467] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.357 [2024-10-07 13:36:34.781485] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.781566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.781594] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.357 [2024-10-07 13:36:34.781611] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.782126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.782156] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.782403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.782428] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.782442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.357 [2024-10-07 13:36:34.782459] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.782473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.782486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.357 [2024-10-07 13:36:34.782707] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.357 [2024-10-07 13:36:34.782732] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.357 [2024-10-07 13:36:34.791415] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.791448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.357 [2024-10-07 13:36:34.792543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.792574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.357 [2024-10-07 13:36:34.792592] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.792737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.357 [2024-10-07 13:36:34.792765] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.357 [2024-10-07 13:36:34.792781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.357 [2024-10-07 13:36:34.794520] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.794551] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.357 [2024-10-07 13:36:34.795140] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.357 [2024-10-07 13:36:34.795165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.357 [2024-10-07 13:36:34.795186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.357 [2024-10-07 13:36:34.795203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.357 [2024-10-07 13:36:34.795217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.357 [2024-10-07 13:36:34.795229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.357 [2024-10-07 13:36:34.795741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.357 [2024-10-07 13:36:34.795766] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.357 [2024-10-07 13:36:34.801536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.357 [2024-10-07 13:36:34.801581] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.357 [2024-10-07 13:36:34.801728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.357 [2024-10-07 13:36:34.801758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.357 [2024-10-07 13:36:34.801775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.357 [2024-10-07 13:36:34.801921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.357 [2024-10-07 13:36:34.801948] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.357 [2024-10-07 13:36:34.801964] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.357 [2024-10-07 13:36:34.801983] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.802009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.802028] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.802041] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.802055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.802080] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.802102] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.802117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.802130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.802169] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.811637] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.811815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.811862] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.358 [2024-10-07 13:36:34.811880] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.811906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.811938] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.812049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.812077] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.358 [2024-10-07 13:36:34.812094] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.812110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.812123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.812137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.812162] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.812183] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.812205] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.812220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.812233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.812257] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.824059] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.824093] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.824726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.824758] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.358 [2024-10-07 13:36:34.824775] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.824857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.824883] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.358 [2024-10-07 13:36:34.824899] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.825269] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.825318] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.825397] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.825417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.825445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.825463] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.825477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.825491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.825517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.825533] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.838681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.838715] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.839307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.839338] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.358 [2024-10-07 13:36:34.839356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.839438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.839463] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.358 [2024-10-07 13:36:34.839479] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.840308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.840339] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.840753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.840777] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.840791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.840815] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.840829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.840842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.841085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.841111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.855213] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.855246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.855566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.855602] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.358 [2024-10-07 13:36:34.855621] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.855728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.855756] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.358 [2024-10-07 13:36:34.855772] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.856091] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.856135] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.856688] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.856714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.856745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.856762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.856776] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.856788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.857025] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.857050] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.871613] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.871662] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.872070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.872101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.358 [2024-10-07 13:36:34.872120] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.872258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.872285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.358 [2024-10-07 13:36:34.872302] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.872505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.872535] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.872746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.872770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.872784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.872802] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.872816] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.872834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.873039] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.873063] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.886811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.886845] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.887182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.887214] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.358 [2024-10-07 13:36:34.887232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.887317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.887343] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.358 [2024-10-07 13:36:34.887359] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.887564] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.887593] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.887641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.887661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.887685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.887712] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.887727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.887739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.887921] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.887970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.901984] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.902017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.902154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.902183] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.358 [2024-10-07 13:36:34.902200] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.902281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.902307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.358 [2024-10-07 13:36:34.902323] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.902349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.902377] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.902399] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.902414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.902427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.902444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.902458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.902471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.902496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.902525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.912098] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.912145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.912261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.912289] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.358 [2024-10-07 13:36:34.912305] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.912466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.912493] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.358 [2024-10-07 13:36:34.912509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.358 [2024-10-07 13:36:34.912529] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.912555] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.358 [2024-10-07 13:36:34.912573] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.912587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.912600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.915183] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.915212] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.358 [2024-10-07 13:36:34.915227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.358 [2024-10-07 13:36:34.915240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.358 [2024-10-07 13:36:34.919123] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.358 [2024-10-07 13:36:34.922182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.358 [2024-10-07 13:36:34.922301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.358 [2024-10-07 13:36:34.922331] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.358 [2024-10-07 13:36:34.922348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.359 [2024-10-07 13:36:34.922393] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.359 [2024-10-07 13:36:34.922425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.359 [2024-10-07 13:36:34.922455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.359 [2024-10-07 13:36:34.922472] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.359 [2024-10-07 13:36:34.922485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.359 [2024-10-07 13:36:34.922508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.359 [2024-10-07 13:36:34.922695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.359 [2024-10-07 13:36:34.922722] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.359 [2024-10-07 13:36:34.922738] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.359 [2024-10-07 13:36:34.922764] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.359 [2024-10-07 13:36:34.922788] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.359 [2024-10-07 13:36:34.922803] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.359 [2024-10-07 13:36:34.922816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.359 [2024-10-07 13:36:34.922841] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.359 [2024-10-07 13:36:34.935791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.359 [2024-10-07 13:36:34.935825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.359 [2024-10-07 13:36:34.936242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.359 [2024-10-07 13:36:34.936288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.359 [2024-10-07 13:36:34.936306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.359 [2024-10-07 13:36:34.936420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.359 [2024-10-07 13:36:34.936446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.359 [2024-10-07 13:36:34.936463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.359 [2024-10-07 13:36:34.936675] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.359 [2024-10-07 13:36:34.936705] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.359 [2024-10-07 13:36:34.936753] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.359 [2024-10-07 13:36:34.936773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.359 [2024-10-07 13:36:34.936787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.359 [2024-10-07 13:36:34.936804] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.359 [2024-10-07 13:36:34.936819] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.359 [2024-10-07 13:36:34.936846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.359 [2024-10-07 13:36:34.937306] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.359 [2024-10-07 13:36:34.937328] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.359 [2024-10-07 13:36:34.950631] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.359 [2024-10-07 13:36:34.950690] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.359 [2024-10-07 13:36:34.951042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.359 [2024-10-07 13:36:34.951073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.359 [2024-10-07 13:36:34.951091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.359 [2024-10-07 13:36:34.951204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.359 [2024-10-07 13:36:34.951231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.359 [2024-10-07 13:36:34.951247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.359 [2024-10-07 13:36:34.951451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.359 [2024-10-07 13:36:34.951479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.359 [2024-10-07 13:36:34.951689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.359 [2024-10-07 13:36:34.951715] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.359 [2024-10-07 13:36:34.951730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.359 [2024-10-07 13:36:34.951748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.359 [2024-10-07 13:36:34.951763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.359 [2024-10-07 13:36:34.951776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.359 [2024-10-07 13:36:34.951826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.359 [2024-10-07 13:36:34.951847] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.359 [2024-10-07 13:36:34.965556] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.359 [2024-10-07 13:36:34.965590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.359 [2024-10-07 13:36:34.965707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.359 [2024-10-07 13:36:34.965737] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.359 [2024-10-07 13:36:34.965754] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.359 [2024-10-07 13:36:34.965865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.359 [2024-10-07 13:36:34.965891] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.359 [2024-10-07 13:36:34.965908] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.359 [2024-10-07 13:36:34.965934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.359 [2024-10-07 13:36:34.965955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.359 [2024-10-07 13:36:34.965984] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.359 [2024-10-07 13:36:34.966000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.359 [2024-10-07 13:36:34.966014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.359 [2024-10-07 13:36:34.966030] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.359 [2024-10-07 13:36:34.966045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.359 [2024-10-07 13:36:34.966058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.359 [2024-10-07 13:36:34.966099] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.359 [2024-10-07 13:36:34.966115] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.359 [2024-10-07 13:36:34.981745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.359 [2024-10-07 13:36:34.982338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.359 [2024-10-07 13:36:34.982487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.359 [2024-10-07 13:36:34.982516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.359 [2024-10-07 13:36:34.982533] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.359 [2024-10-07 13:36:34.982879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.359 [2024-10-07 13:36:34.982910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.359 [2024-10-07 13:36:34.982927] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.359 [2024-10-07 13:36:34.982947] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.359 [2024-10-07 13:36:34.983154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.359 [2024-10-07 13:36:34.983179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.359 [2024-10-07 13:36:34.983194] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.359 [2024-10-07 13:36:34.983207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.359 [2024-10-07 13:36:34.983259] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.359 [2024-10-07 13:36:34.983280] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.359 [2024-10-07 13:36:34.983294] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.359 [2024-10-07 13:36:34.983308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.359 [2024-10-07 13:36:34.983489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.359 [2024-10-07 13:36:34.998332] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.359 [2024-10-07 13:36:34.998364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.359 [2024-10-07 13:36:34.998549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.359 [2024-10-07 13:36:34.998579] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.359 [2024-10-07 13:36:34.998596] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.359 [2024-10-07 13:36:34.998687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.359 [2024-10-07 13:36:34.998713] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.359 [2024-10-07 13:36:34.998729] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.359 [2024-10-07 13:36:34.998754] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.359 [2024-10-07 13:36:34.998776] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.359 [2024-10-07 13:36:34.998797] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.359 [2024-10-07 13:36:34.998812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.359 [2024-10-07 13:36:34.998825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.359 [2024-10-07 13:36:34.998842] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.359 [2024-10-07 13:36:34.998858] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.359 [2024-10-07 13:36:34.998871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.359 [2024-10-07 13:36:34.998895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.359 [2024-10-07 13:36:34.998912] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.359 [2024-10-07 13:36:35.012896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.359 [2024-10-07 13:36:35.012931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.359 [2024-10-07 13:36:35.013738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.359 [2024-10-07 13:36:35.013771] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.359 [2024-10-07 13:36:35.013789] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.359 [2024-10-07 13:36:35.013874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.359 [2024-10-07 13:36:35.013899] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.359 [2024-10-07 13:36:35.013915] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.359 [2024-10-07 13:36:35.014325] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.359 [2024-10-07 13:36:35.014354] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.359 [2024-10-07 13:36:35.014581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.359 [2024-10-07 13:36:35.014608] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.359 [2024-10-07 13:36:35.014623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.359 [2024-10-07 13:36:35.014641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.359 [2024-10-07 13:36:35.014656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.359 [2024-10-07 13:36:35.014678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.359 [2024-10-07 13:36:35.014738] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.359 [2024-10-07 13:36:35.014760] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.359 [2024-10-07 13:36:35.026982] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.359 [2024-10-07 13:36:35.027017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.359 [2024-10-07 13:36:35.028657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.359 [2024-10-07 13:36:35.028699] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.359 [2024-10-07 13:36:35.028718] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.359 [2024-10-07 13:36:35.028826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.359 [2024-10-07 13:36:35.028852] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.359 [2024-10-07 13:36:35.028867] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.359 [2024-10-07 13:36:35.029579] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.359 [2024-10-07 13:36:35.029626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.359 [2024-10-07 13:36:35.030046] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.359 [2024-10-07 13:36:35.030071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.359 [2024-10-07 13:36:35.030100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.359 [2024-10-07 13:36:35.030118] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.359 [2024-10-07 13:36:35.030131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.359 [2024-10-07 13:36:35.030144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.359 [2024-10-07 13:36:35.030218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.359 [2024-10-07 13:36:35.030255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.359 [2024-10-07 13:36:35.037131] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.359 [2024-10-07 13:36:35.037165] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.359 [2024-10-07 13:36:35.037303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.359 [2024-10-07 13:36:35.037332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.359 [2024-10-07 13:36:35.037349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.359 [2024-10-07 13:36:35.037459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.359 [2024-10-07 13:36:35.037484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.359 [2024-10-07 13:36:35.037500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.359 [2024-10-07 13:36:35.037637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.359 [2024-10-07 13:36:35.037678] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.359 [2024-10-07 13:36:35.037806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.359 [2024-10-07 13:36:35.037833] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.359 [2024-10-07 13:36:35.037848] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.359 [2024-10-07 13:36:35.037866] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.359 [2024-10-07 13:36:35.037881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.359 [2024-10-07 13:36:35.037894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.359 [2024-10-07 13:36:35.038005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.359 [2024-10-07 13:36:35.038027] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.359 [2024-10-07 13:36:35.047246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.359 [2024-10-07 13:36:35.047292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.359 [2024-10-07 13:36:35.047430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.359 [2024-10-07 13:36:35.047458] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.359 [2024-10-07 13:36:35.047475] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.359 [2024-10-07 13:36:35.047563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.359 [2024-10-07 13:36:35.047588] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.359 [2024-10-07 13:36:35.047605] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.359 [2024-10-07 13:36:35.047625] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.359 [2024-10-07 13:36:35.047652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.359 [2024-10-07 13:36:35.047679] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.359 [2024-10-07 13:36:35.047695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.359 [2024-10-07 13:36:35.047709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.359 [2024-10-07 13:36:35.049103] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.359 [2024-10-07 13:36:35.049131] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.359 [2024-10-07 13:36:35.049145] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.359 [2024-10-07 13:36:35.049158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.359 [2024-10-07 13:36:35.049544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.359 [2024-10-07 13:36:35.061193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.359 [2024-10-07 13:36:35.061227] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.360 [2024-10-07 13:36:35.061390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.360 [2024-10-07 13:36:35.061418] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.360 [2024-10-07 13:36:35.061436] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.360 [2024-10-07 13:36:35.061520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.360 [2024-10-07 13:36:35.061547] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.360 [2024-10-07 13:36:35.061563] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.360 [2024-10-07 13:36:35.061589] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.360 [2024-10-07 13:36:35.061611] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.360 [2024-10-07 13:36:35.061633] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.360 [2024-10-07 13:36:35.061648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.360 [2024-10-07 13:36:35.061661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.360 [2024-10-07 13:36:35.061692] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.360 [2024-10-07 13:36:35.061708] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.360 [2024-10-07 13:36:35.061721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.360 [2024-10-07 13:36:35.061745] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.360 [2024-10-07 13:36:35.061762] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.360 [2024-10-07 13:36:35.077702] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.360 [2024-10-07 13:36:35.077752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.360 [2024-10-07 13:36:35.078090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.360 [2024-10-07 13:36:35.078123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.360 [2024-10-07 13:36:35.078141] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.360 [2024-10-07 13:36:35.078251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.360 [2024-10-07 13:36:35.078277] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.360 [2024-10-07 13:36:35.078293] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.360 [2024-10-07 13:36:35.078498] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.360 [2024-10-07 13:36:35.078527] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.360 [2024-10-07 13:36:35.078575] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.360 [2024-10-07 13:36:35.078595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.360 [2024-10-07 13:36:35.078609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.360 [2024-10-07 13:36:35.078626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.360 [2024-10-07 13:36:35.078640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.360 [2024-10-07 13:36:35.078653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.360 [2024-10-07 13:36:35.078842] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.360 [2024-10-07 13:36:35.078885] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.360 [2024-10-07 13:36:35.093129] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.360 [2024-10-07 13:36:35.093162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.360 [2024-10-07 13:36:35.093731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.360 [2024-10-07 13:36:35.093763] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.360 [2024-10-07 13:36:35.093781] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.360 [2024-10-07 13:36:35.093860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.360 [2024-10-07 13:36:35.093886] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.360 [2024-10-07 13:36:35.093902] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.360 [2024-10-07 13:36:35.094120] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.360 [2024-10-07 13:36:35.094149] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.360 [2024-10-07 13:36:35.094349] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.360 [2024-10-07 13:36:35.094372] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.360 [2024-10-07 13:36:35.094387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.360 [2024-10-07 13:36:35.094404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.360 [2024-10-07 13:36:35.094420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.360 [2024-10-07 13:36:35.094433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.360 [2024-10-07 13:36:35.094496] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.360 [2024-10-07 13:36:35.094517] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.360 [2024-10-07 13:36:35.108192] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.360 [2024-10-07 13:36:35.108226] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.360 [2024-10-07 13:36:35.108365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.360 [2024-10-07 13:36:35.108394] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.360 [2024-10-07 13:36:35.108410] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.360 [2024-10-07 13:36:35.108500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.360 [2024-10-07 13:36:35.108528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.360 [2024-10-07 13:36:35.108544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.360 [2024-10-07 13:36:35.108570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.360 [2024-10-07 13:36:35.108592] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.360 [2024-10-07 13:36:35.108613] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.360 [2024-10-07 13:36:35.108628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.360 [2024-10-07 13:36:35.108648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.360 [2024-10-07 13:36:35.108675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.360 [2024-10-07 13:36:35.108693] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.360 [2024-10-07 13:36:35.108707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.360 [2024-10-07 13:36:35.108751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.360 [2024-10-07 13:36:35.108772] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.360 [2024-10-07 13:36:35.124353] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.360 [2024-10-07 13:36:35.124402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.360 [2024-10-07 13:36:35.124734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.360 [2024-10-07 13:36:35.124766] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.360 [2024-10-07 13:36:35.124784] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.360 [2024-10-07 13:36:35.124894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.360 [2024-10-07 13:36:35.124921] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.360 [2024-10-07 13:36:35.124937] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.360 [2024-10-07 13:36:35.125154] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.360 [2024-10-07 13:36:35.125182] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.360 [2024-10-07 13:36:35.125230] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.360 [2024-10-07 13:36:35.125251] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.360 [2024-10-07 13:36:35.125265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.360 [2024-10-07 13:36:35.125283] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.360 [2024-10-07 13:36:35.125296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.360 [2024-10-07 13:36:35.125309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.360 [2024-10-07 13:36:35.125504] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.360 [2024-10-07 13:36:35.125527] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.360 [2024-10-07 13:36:35.140485] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.360 [2024-10-07 13:36:35.140519] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.360 [2024-10-07 13:36:35.140767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.360 [2024-10-07 13:36:35.140797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.360 [2024-10-07 13:36:35.140814] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.360 [2024-10-07 13:36:35.140917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.360 [2024-10-07 13:36:35.140949] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.360 [2024-10-07 13:36:35.140966] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.360 [2024-10-07 13:36:35.140992] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.360 [2024-10-07 13:36:35.141013] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.360 [2024-10-07 13:36:35.141035] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.360 [2024-10-07 13:36:35.141050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.360 [2024-10-07 13:36:35.141063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.360 [2024-10-07 13:36:35.141080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.360 [2024-10-07 13:36:35.141094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.360 [2024-10-07 13:36:35.141107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.360 [2024-10-07 13:36:35.141132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.360 [2024-10-07 13:36:35.141164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.360 [2024-10-07 13:36:35.155254] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.360 [2024-10-07 13:36:35.155286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.360 [2024-10-07 13:36:35.155388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.360 [2024-10-07 13:36:35.155417] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.360 [2024-10-07 13:36:35.155434] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.360 [2024-10-07 13:36:35.155548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.360 [2024-10-07 13:36:35.155574] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.360 [2024-10-07 13:36:35.155590] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.360 [2024-10-07 13:36:35.155616] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.360 [2024-10-07 13:36:35.155637] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.360 [2024-10-07 13:36:35.155659] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.360 [2024-10-07 13:36:35.155683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.360 [2024-10-07 13:36:35.155698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.360 [2024-10-07 13:36:35.155714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.360 [2024-10-07 13:36:35.155728] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.360 [2024-10-07 13:36:35.155741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.360 [2024-10-07 13:36:35.155767] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.360 [2024-10-07 13:36:35.155783] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.360 [2024-10-07 13:36:35.169601] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.360 [2024-10-07 13:36:35.169635] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.360 [2024-10-07 13:36:35.171028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.360 [2024-10-07 13:36:35.171061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.360 [2024-10-07 13:36:35.171079] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.360 [2024-10-07 13:36:35.171191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.360 [2024-10-07 13:36:35.171217] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.360 [2024-10-07 13:36:35.171232] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.360 [2024-10-07 13:36:35.171959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.360 [2024-10-07 13:36:35.172005] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.360 [2024-10-07 13:36:35.172256] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.360 [2024-10-07 13:36:35.172280] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.360 [2024-10-07 13:36:35.172294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.360 [2024-10-07 13:36:35.172313] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.360 [2024-10-07 13:36:35.172328] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.360 [2024-10-07 13:36:35.172341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.360 [2024-10-07 13:36:35.172545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.360 [2024-10-07 13:36:35.172569] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.360 [2024-10-07 13:36:35.180046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.360 [2024-10-07 13:36:35.180079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.360 [2024-10-07 13:36:35.180267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.360 [2024-10-07 13:36:35.180296] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.360 [2024-10-07 13:36:35.180313] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.360 [2024-10-07 13:36:35.180424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.360 [2024-10-07 13:36:35.180450] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.360 [2024-10-07 13:36:35.180466] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.360 [2024-10-07 13:36:35.180925] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.360 [2024-10-07 13:36:35.180955] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.360 [2024-10-07 13:36:35.180994] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.360 [2024-10-07 13:36:35.181009] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.360 [2024-10-07 13:36:35.181031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.360 [2024-10-07 13:36:35.181048] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.360 [2024-10-07 13:36:35.181062] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.360 [2024-10-07 13:36:35.181075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.360 [2024-10-07 13:36:35.181098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.360 [2024-10-07 13:36:35.181113] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.360 [2024-10-07 13:36:35.190176] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.360 [2024-10-07 13:36:35.190394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.360 [2024-10-07 13:36:35.190626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.360 [2024-10-07 13:36:35.190656] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.360 [2024-10-07 13:36:35.190682] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.360 [2024-10-07 13:36:35.190821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.360 [2024-10-07 13:36:35.190849] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.360 [2024-10-07 13:36:35.190865] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.360 [2024-10-07 13:36:35.190885] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.360 [2024-10-07 13:36:35.190911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.360 [2024-10-07 13:36:35.190930] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.360 [2024-10-07 13:36:35.190944] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.360 [2024-10-07 13:36:35.190958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.360 [2024-10-07 13:36:35.190983] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.190999] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.191013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.191027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.191272] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.203721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.203754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.203864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.203892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.361 [2024-10-07 13:36:35.203909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.203999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.204026] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.361 [2024-10-07 13:36:35.204047] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.204074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.204096] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.204117] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.204148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.204161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.204178] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.204191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.204204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.204245] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.204261] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.219810] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.219844] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.220205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.220237] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.361 [2024-10-07 13:36:35.220255] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.220367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.220395] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.361 [2024-10-07 13:36:35.220411] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.220779] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.220809] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.220894] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.220915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.220930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.220948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.220962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.220974] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.221156] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.221192] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.235825] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.235865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.236220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.236252] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.361 [2024-10-07 13:36:35.236270] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.236378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.236404] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.361 [2024-10-07 13:36:35.236421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.236640] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.236680] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.236886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.236911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.236925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.236943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.236957] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.236970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.237173] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.237198] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.251891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.251924] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.252267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.252299] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.361 [2024-10-07 13:36:35.252317] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.252409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.252435] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.361 [2024-10-07 13:36:35.252451] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.252655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.252694] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.252906] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.252930] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.252945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.252968] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.252984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.252997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.253048] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.253068] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.267711] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.267745] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.268296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.268327] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.361 [2024-10-07 13:36:35.268344] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.268458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.268484] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.361 [2024-10-07 13:36:35.268500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.268727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.268758] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.268960] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.268985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.268999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.269017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.269031] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.269044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.269095] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.269116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.283048] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.283081] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.283422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.283453] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.361 [2024-10-07 13:36:35.283471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.283606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.283633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.361 [2024-10-07 13:36:35.283649] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.284048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.284093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.284166] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.284186] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.284215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.284233] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.284248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.284261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.284443] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.284466] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.298834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.298867] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.299008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.299039] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.361 [2024-10-07 13:36:35.299056] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.299170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.299198] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.361 [2024-10-07 13:36:35.299214] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.299239] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.299260] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.299282] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.299296] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.299309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.299326] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.299340] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.299352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.299377] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.299394] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.314389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.314422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.314848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.314880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.361 [2024-10-07 13:36:35.314897] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.315008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.315034] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.361 [2024-10-07 13:36:35.315051] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.315317] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.315348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.315579] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.315604] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.315618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.315635] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.315649] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.315663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.315879] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.315904] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.361 [2024-10-07 13:36:35.330404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.330436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.361 [2024-10-07 13:36:35.330976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.331007] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.361 [2024-10-07 13:36:35.331024] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.331103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.361 [2024-10-07 13:36:35.331128] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.361 [2024-10-07 13:36:35.331144] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.361 [2024-10-07 13:36:35.331389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.331419] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.361 [2024-10-07 13:36:35.331483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.361 [2024-10-07 13:36:35.331503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.361 [2024-10-07 13:36:35.331533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.361 [2024-10-07 13:36:35.331551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.362 [2024-10-07 13:36:35.331571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.362 [2024-10-07 13:36:35.331586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.362 [2024-10-07 13:36:35.331613] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.362 [2024-10-07 13:36:35.331630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.362 [2024-10-07 13:36:35.345855] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.362 [2024-10-07 13:36:35.345905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.362 [2024-10-07 13:36:35.346265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.362 [2024-10-07 13:36:35.346297] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.362 [2024-10-07 13:36:35.346315] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.362 [2024-10-07 13:36:35.346397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.362 [2024-10-07 13:36:35.346423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.362 [2024-10-07 13:36:35.346439] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.362 [2024-10-07 13:36:35.346644] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.362 [2024-10-07 13:36:35.346683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.362 [2024-10-07 13:36:35.346886] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.362 [2024-10-07 13:36:35.346910] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.362 [2024-10-07 13:36:35.346925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.362 [2024-10-07 13:36:35.346943] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.346958] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.346971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.347036] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.362 [2024-10-07 13:36:35.347055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.362 [2024-10-07 13:36:35.360303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.360337] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.360497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.360526] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.362 [2024-10-07 13:36:35.360544] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.360631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.360659] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.362 [2024-10-07 13:36:35.360685] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.360712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.360740] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.360762] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.360778] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.360792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.360809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.360824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.360836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.360876] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.362 [2024-10-07 13:36:35.360892] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.362 [2024-10-07 13:36:35.375195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.375229] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.376086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.376118] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.362 [2024-10-07 13:36:35.376136] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.376248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.376275] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.362 [2024-10-07 13:36:35.376291] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.376383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.376409] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.376431] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.376446] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.376459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.362 [2024-10-07 13:36:35.376477] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.376491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.376504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.376528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.362 [2024-10-07 13:36:35.376544] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.362 [2024-10-07 13:36:35.388463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.388499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.391110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.391147] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.362 [2024-10-07 13:36:35.391165] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.391257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.391284] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.362 [2024-10-07 13:36:35.391300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.392251] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.392281] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.392727] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.392752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.392766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.392784] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.392798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.392812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.393084] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.362 [2024-10-07 13:36:35.393111] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.362 [2024-10-07 13:36:35.398582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.398905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.399136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.399166] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.362 [2024-10-07 13:36:35.399183] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.399394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.399423] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.362 [2024-10-07 13:36:35.399440] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.399459] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.399596] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.399622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.399636] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.399664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.362 [2024-10-07 13:36:35.399808] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.362 [2024-10-07 13:36:35.399830] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.399850] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.399864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.401337] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.362 [2024-10-07 13:36:35.408691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.408875] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.408910] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.362 [2024-10-07 13:36:35.408940] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.408965] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.408990] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.409005] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.409019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.362 [2024-10-07 13:36:35.409055] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.362 [2024-10-07 13:36:35.409082] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.409281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.409308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.362 [2024-10-07 13:36:35.409325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.409351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.409374] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.409389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.409403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.409427] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.362 [2024-10-07 13:36:35.418788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.418919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.418950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.362 [2024-10-07 13:36:35.418967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.421072] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.423193] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.423220] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.423235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.424107] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.424141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.362 [2024-10-07 13:36:35.424529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.424559] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.362 [2024-10-07 13:36:35.424576] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.424627] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.424655] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.424680] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.424695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.424720] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.362 [2024-10-07 13:36:35.430217] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.430369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.430399] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.362 [2024-10-07 13:36:35.430417] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.430489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.430892] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.430916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.430930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.430956] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.362 [2024-10-07 13:36:35.436545] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.436752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.436784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.362 [2024-10-07 13:36:35.436801] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.436828] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.436852] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.436868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.436881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.436907] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.362 [2024-10-07 13:36:35.440609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.440865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.440896] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.362 [2024-10-07 13:36:35.440914] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.440945] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.440971] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.440991] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.441005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.441030] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.362 [2024-10-07 13:36:35.447765] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.448002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.448043] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.362 [2024-10-07 13:36:35.448061] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.448171] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.450775] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.450802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.450816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.451733] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.362 [2024-10-07 13:36:35.452014] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.452855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.452885] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.362 [2024-10-07 13:36:35.452903] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.453451] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.453714] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.453748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.453762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.453965] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.362 [2024-10-07 13:36:35.457853] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.457974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.362 [2024-10-07 13:36:35.458004] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.362 [2024-10-07 13:36:35.458022] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.362 [2024-10-07 13:36:35.458048] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.362 [2024-10-07 13:36:35.458072] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.362 [2024-10-07 13:36:35.458087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.362 [2024-10-07 13:36:35.458107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.362 [2024-10-07 13:36:35.458132] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.362 [2024-10-07 13:36:35.462854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.362 [2024-10-07 13:36:35.463116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.463148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.363 [2024-10-07 13:36:35.463166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.463275] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.463404] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.463425] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.463440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.465088] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.468695] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.469602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.469633] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.363 [2024-10-07 13:36:35.469650] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.470074] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.470300] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.470325] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.470341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.470393] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.472942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.473107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.473135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.363 [2024-10-07 13:36:35.473152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.473177] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.473201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.473216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.473230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.473254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.481334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.481980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.482012] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.363 [2024-10-07 13:36:35.482030] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.482254] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.482311] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.482331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.482345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.482371] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.483041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.483160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.483187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.363 [2024-10-07 13:36:35.483219] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.483244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.483429] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.483452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.483467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.483589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.492193] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.492449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.492481] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.363 [2024-10-07 13:36:35.492500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.492607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.492744] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.492767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.492781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.492889] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.498022] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.498345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.498377] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.363 [2024-10-07 13:36:35.498395] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.498446] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.498480] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.498497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.498510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.498702] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.502281] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.502495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.502523] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.363 [2024-10-07 13:36:35.502540] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.502565] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.502589] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.502605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.502620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.502644] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.511685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.511809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.511838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.363 [2024-10-07 13:36:35.511855] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.511881] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.511905] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.511921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.511935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.511960] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.512363] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.512494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.512524] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.363 [2024-10-07 13:36:35.512541] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.512567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.512591] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.512605] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.512619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.512653] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.521778] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.521900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.521929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.363 [2024-10-07 13:36:35.521947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.521972] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.521996] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.522011] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.522024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.522049] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.527253] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.527519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.527550] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.363 [2024-10-07 13:36:35.527568] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.527594] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.527618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.527634] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.527647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.527681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.532827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.533087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.533119] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.363 [2024-10-07 13:36:35.533137] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.533243] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.533285] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.533305] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.533319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.533360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.542133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.542460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.542492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.363 [2024-10-07 13:36:35.542530] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.542614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.542812] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.542836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.542850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.542902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.542947] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.543079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.543106] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.363 [2024-10-07 13:36:35.543124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.543308] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.543378] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.543414] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.543428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.543453] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.363 [2024-10-07 13:36:35.556992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.557024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.557159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.557188] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.363 [2024-10-07 13:36:35.557205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.557305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.557332] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.363 [2024-10-07 13:36:35.557348] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.557374] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.557396] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.557417] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.557432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.557445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.363 [2024-10-07 13:36:35.557462] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.363 [2024-10-07 13:36:35.557482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.363 [2024-10-07 13:36:35.557495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.363 [2024-10-07 13:36:35.557520] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.363 [2024-10-07 13:36:35.557537] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.363 [2024-10-07 13:36:35.567102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.567149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.363 [2024-10-07 13:36:35.567260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.567287] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.363 [2024-10-07 13:36:35.567304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.567428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.363 [2024-10-07 13:36:35.567454] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.363 [2024-10-07 13:36:35.567470] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.363 [2024-10-07 13:36:35.567489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.363 [2024-10-07 13:36:35.570195] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.363 [2024-10-07 13:36:35.570225] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.364 [2024-10-07 13:36:35.570239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.364 [2024-10-07 13:36:35.570253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.364 [2024-10-07 13:36:35.570705] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.364 [2024-10-07 13:36:35.570733] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.364 [2024-10-07 13:36:35.570748] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.364 [2024-10-07 13:36:35.570761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.364 [2024-10-07 13:36:35.570896] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.364 [2024-10-07 13:36:35.577187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.364 [2024-10-07 13:36:35.577306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.364 [2024-10-07 13:36:35.577336] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.364 [2024-10-07 13:36:35.577353] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.364 [2024-10-07 13:36:35.577552] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.364 [2024-10-07 13:36:35.577629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.364 [2024-10-07 13:36:35.577687] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.364 [2024-10-07 13:36:35.577705] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.364 [2024-10-07 13:36:35.577720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.364 [2024-10-07 13:36:35.577751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.364 [2024-10-07 13:36:35.577843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.364 [2024-10-07 13:36:35.577870] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.364 [2024-10-07 13:36:35.577887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.364 [2024-10-07 13:36:35.577912] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.364 [2024-10-07 13:36:35.577936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.364 [2024-10-07 13:36:35.577952] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.364 [2024-10-07 13:36:35.577965] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.364 [2024-10-07 13:36:35.577989] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.364 [2024-10-07 13:36:35.591233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.364 [2024-10-07 13:36:35.591743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.364 [2024-10-07 13:36:35.591854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.364 [2024-10-07 13:36:35.591884] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.364 [2024-10-07 13:36:35.591901] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.364 [2024-10-07 13:36:35.592229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.364 [2024-10-07 13:36:35.592260] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.364 [2024-10-07 13:36:35.592278] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.364 [2024-10-07 13:36:35.592297] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.364 [2024-10-07 13:36:35.592542] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.364 [2024-10-07 13:36:35.592570] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.364 [2024-10-07 13:36:35.592585] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.364 [2024-10-07 13:36:35.592614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.364 [2024-10-07 13:36:35.592844] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.364 [2024-10-07 13:36:35.592868] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.364 [2024-10-07 13:36:35.592882] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.364 [2024-10-07 13:36:35.592895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.364 [2024-10-07 13:36:35.592946] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.364 [2024-10-07 13:36:35.605687] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.364 [2024-10-07 13:36:35.605721] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.364 [2024-10-07 13:36:35.606113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.364 [2024-10-07 13:36:35.606145] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.364 [2024-10-07 13:36:35.606169] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.364 [2024-10-07 13:36:35.606306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.364 [2024-10-07 13:36:35.606333] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.364 [2024-10-07 13:36:35.606349] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.364 [2024-10-07 13:36:35.606768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.364 [2024-10-07 13:36:35.606799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.364 [2024-10-07 13:36:35.607011] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.364 [2024-10-07 13:36:35.607036] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.364 [2024-10-07 13:36:35.607050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.364 [2024-10-07 13:36:35.607068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.364 [2024-10-07 13:36:35.607083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.364 [2024-10-07 13:36:35.607096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.364 [2024-10-07 13:36:35.607147] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.364 [2024-10-07 13:36:35.607168] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.364 [2024-10-07 13:36:35.620293] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.364 [2024-10-07 13:36:35.620340] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.364 [2024-10-07 13:36:35.620927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.364 [2024-10-07 13:36:35.620959] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.364 [2024-10-07 13:36:35.620976] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.364 [2024-10-07 13:36:35.621059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.364 [2024-10-07 13:36:35.621084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.364 [2024-10-07 13:36:35.621100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.364 [2024-10-07 13:36:35.621319] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.364 [2024-10-07 13:36:35.621348] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.364 [2024-10-07 13:36:35.621548] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.364 [2024-10-07 13:36:35.621571] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.364 [2024-10-07 13:36:35.621585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.364 [2024-10-07 13:36:35.621602] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.364 [2024-10-07 13:36:35.621616] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.364 [2024-10-07 13:36:35.621635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.364 [2024-10-07 13:36:35.621711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.364 [2024-10-07 13:36:35.621748] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.364 [2024-10-07 13:36:35.630600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.364 [2024-10-07 13:36:35.630632] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.364 [2024-10-07 13:36:35.632618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.364 [2024-10-07 13:36:35.632651] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.364 [2024-10-07 13:36:35.632676] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.364 [2024-10-07 13:36:35.632764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.364 [2024-10-07 13:36:35.632790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.364 [2024-10-07 13:36:35.632806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.364 [2024-10-07 13:36:35.635114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.364 [2024-10-07 13:36:35.635147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.364 [2024-10-07 13:36:35.636062] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.364 [2024-10-07 13:36:35.636087] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.364 [2024-10-07 13:36:35.636115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.364 [2024-10-07 13:36:35.636134] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.364 [2024-10-07 13:36:35.636148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.364 [2024-10-07 13:36:35.636162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.364 [2024-10-07 13:36:35.636727] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.364 [2024-10-07 13:36:35.636754] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.364 8455.07 IOPS, 33.03 MiB/s [2024-10-07T11:36:38.076Z] [2024-10-07 13:36:35.640720] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.364 [2024-10-07 13:36:35.640768] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.364 [2024-10-07 13:36:35.641055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.364 [2024-10-07 13:36:35.641087] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.364 [2024-10-07 13:36:35.641105] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.364 [2024-10-07 13:36:35.641184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.364 [2024-10-07 13:36:35.641210] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.364 [2024-10-07 13:36:35.641226] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.364 [2024-10-07 13:36:35.641368] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.364 [2024-10-07 13:36:35.641402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.364 [2024-10-07 13:36:35.641511] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.364 [2024-10-07 13:36:35.641533] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.364 [2024-10-07 13:36:35.641547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.364 [2024-10-07 13:36:35.641565] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.364 [2024-10-07 13:36:35.641579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.364 [2024-10-07 13:36:35.641592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.364 [2024-10-07 13:36:35.641709] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.364 [2024-10-07 13:36:35.641746] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.364 [2024-10-07 13:36:35.650848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.364 [2024-10-07 13:36:35.651329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.364 [2024-10-07 13:36:35.651455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.364 [2024-10-07 13:36:35.651483] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.364 [2024-10-07 13:36:35.651500] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.364 [2024-10-07 13:36:35.651686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.364 [2024-10-07 13:36:35.651715] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.364 [2024-10-07 13:36:35.651732] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.364 [2024-10-07 13:36:35.651751] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.364 [2024-10-07 13:36:35.652002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.364 [2024-10-07 13:36:35.652026] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.364 [2024-10-07 13:36:35.652039] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.364 [2024-10-07 13:36:35.652052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.364 [2024-10-07 13:36:35.652118] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.364 [2024-10-07 13:36:35.652139] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.364 [2024-10-07 13:36:35.652154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.364 [2024-10-07 13:36:35.652167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.364 [2024-10-07 13:36:35.652347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.364 [2024-10-07 13:36:35.661755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.364 [2024-10-07 13:36:35.661789] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.364 [2024-10-07 13:36:35.663066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.364 [2024-10-07 13:36:35.663100] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.364 [2024-10-07 13:36:35.663122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.364 [2024-10-07 13:36:35.663207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.364 [2024-10-07 13:36:35.663232] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.364 [2024-10-07 13:36:35.663248] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.364 [2024-10-07 13:36:35.664950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.364 [2024-10-07 13:36:35.664982] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.364 [2024-10-07 13:36:35.665479] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.364 [2024-10-07 13:36:35.665519] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.364 [2024-10-07 13:36:35.665533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.364 [2024-10-07 13:36:35.665550] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.364 [2024-10-07 13:36:35.665563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.364 [2024-10-07 13:36:35.665576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.364 [2024-10-07 13:36:35.665891] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.364 [2024-10-07 13:36:35.665916] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.364 [2024-10-07 13:36:35.671870] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.364 [2024-10-07 13:36:35.671917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.364 [2024-10-07 13:36:35.672134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.364 [2024-10-07 13:36:35.672162] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.364 [2024-10-07 13:36:35.672178] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.364 [2024-10-07 13:36:35.672265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.364 [2024-10-07 13:36:35.672292] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.364 [2024-10-07 13:36:35.672308] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.364 [2024-10-07 13:36:35.672327] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.364 [2024-10-07 13:36:35.672353] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.364 [2024-10-07 13:36:35.672371] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.364 [2024-10-07 13:36:35.672385] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.364 [2024-10-07 13:36:35.672397] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.364 [2024-10-07 13:36:35.672422] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.364 [2024-10-07 13:36:35.672440] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.364 [2024-10-07 13:36:35.672453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.364 [2024-10-07 13:36:35.672471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.364 [2024-10-07 13:36:35.672511] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.364 [2024-10-07 13:36:35.682175] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.364 [2024-10-07 13:36:35.682209] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.364 [2024-10-07 13:36:35.682340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.364 [2024-10-07 13:36:35.682369] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.364 [2024-10-07 13:36:35.682386] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.364 [2024-10-07 13:36:35.682469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.364 [2024-10-07 13:36:35.682495] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.364 [2024-10-07 13:36:35.682511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.364 [2024-10-07 13:36:35.682704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.682733] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.682781] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.682802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.682815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.682832] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.682847] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.682860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.682886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.682902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.694046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.365 [2024-10-07 13:36:35.694078] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.365 [2024-10-07 13:36:35.694217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.365 [2024-10-07 13:36:35.694246] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.365 [2024-10-07 13:36:35.694264] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.365 [2024-10-07 13:36:35.694335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.365 [2024-10-07 13:36:35.694361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.365 [2024-10-07 13:36:35.694377] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.365 [2024-10-07 13:36:35.694402] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.694423] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.694451] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.694468] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.694481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.694498] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.694513] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.694525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.694550] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.694566] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.709897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.365 [2024-10-07 13:36:35.709931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.365 [2024-10-07 13:36:35.710176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.365 [2024-10-07 13:36:35.710205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.365 [2024-10-07 13:36:35.710223] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.365 [2024-10-07 13:36:35.710332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.365 [2024-10-07 13:36:35.710358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.365 [2024-10-07 13:36:35.710374] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.365 [2024-10-07 13:36:35.710400] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.710422] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.710444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.710459] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.710473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.710490] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.710506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.710519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.710545] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.710561] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.723152] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.365 [2024-10-07 13:36:35.723184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.365 [2024-10-07 13:36:35.725040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.365 [2024-10-07 13:36:35.725073] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.365 [2024-10-07 13:36:35.725091] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.365 [2024-10-07 13:36:35.725179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.365 [2024-10-07 13:36:35.725205] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.365 [2024-10-07 13:36:35.725222] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.365 [2024-10-07 13:36:35.725898] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.725930] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.726056] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.726078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.726092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.726110] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.726125] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.726138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.726416] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.726440] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.733263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.365 [2024-10-07 13:36:35.733309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.365 [2024-10-07 13:36:35.733487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.365 [2024-10-07 13:36:35.733516] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.365 [2024-10-07 13:36:35.733534] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.365 [2024-10-07 13:36:35.733650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.365 [2024-10-07 13:36:35.733684] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.365 [2024-10-07 13:36:35.733701] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.365 [2024-10-07 13:36:35.733719] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.734416] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.734444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.734458] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.734472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.739513] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.739543] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.739558] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.739571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.739710] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.744142] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.365 [2024-10-07 13:36:35.744173] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.365 [2024-10-07 13:36:35.744312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.365 [2024-10-07 13:36:35.744340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.365 [2024-10-07 13:36:35.744356] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.365 [2024-10-07 13:36:35.744426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.365 [2024-10-07 13:36:35.744451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.365 [2024-10-07 13:36:35.744467] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.365 [2024-10-07 13:36:35.744493] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.744514] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.744535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.744551] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.744565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.744581] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.744595] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.744608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.744633] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.744650] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.756279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.365 [2024-10-07 13:36:35.756313] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.365 [2024-10-07 13:36:35.756625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.365 [2024-10-07 13:36:35.756655] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.365 [2024-10-07 13:36:35.756680] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.365 [2024-10-07 13:36:35.756761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.365 [2024-10-07 13:36:35.756787] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.365 [2024-10-07 13:36:35.756803] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.365 [2024-10-07 13:36:35.757264] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.757295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.757512] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.757541] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.757557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.757574] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.757589] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.757602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.757815] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.757839] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.771187] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.365 [2024-10-07 13:36:35.771222] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.365 [2024-10-07 13:36:35.771614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.365 [2024-10-07 13:36:35.771647] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.365 [2024-10-07 13:36:35.771674] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.365 [2024-10-07 13:36:35.771790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.365 [2024-10-07 13:36:35.771817] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.365 [2024-10-07 13:36:35.771832] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.365 [2024-10-07 13:36:35.772037] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.772066] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.365 [2024-10-07 13:36:35.772266] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.772288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.772303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.772320] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.365 [2024-10-07 13:36:35.772335] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.365 [2024-10-07 13:36:35.772348] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.365 [2024-10-07 13:36:35.772412] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.772432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.365 [2024-10-07 13:36:35.781498] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.365 [2024-10-07 13:36:35.781625] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.365 [2024-10-07 13:36:35.781801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.365 [2024-10-07 13:36:35.781830] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.365 [2024-10-07 13:36:35.781847] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.365 [2024-10-07 13:36:35.784892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.365 [2024-10-07 13:36:35.784929] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.365 [2024-10-07 13:36:35.784947] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.365 [2024-10-07 13:36:35.784967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.365 [2024-10-07 13:36:35.786201] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.365 [2024-10-07 13:36:35.786229] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.365 [2024-10-07 13:36:35.786242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.365 [2024-10-07 13:36:35.786255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.365 [2024-10-07 13:36:35.786881] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.365 [2024-10-07 13:36:35.786909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.365 [2024-10-07 13:36:35.786924] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.365 [2024-10-07 13:36:35.786938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.365 [2024-10-07 13:36:35.787197] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.365 [2024-10-07 13:36:35.791583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.365 [2024-10-07 13:36:35.791761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.365 [2024-10-07 13:36:35.791790] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.365 [2024-10-07 13:36:35.791807] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.365 [2024-10-07 13:36:35.791832] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.365 [2024-10-07 13:36:35.791869] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.365 [2024-10-07 13:36:35.791888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.365 [2024-10-07 13:36:35.791902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.365 [2024-10-07 13:36:35.791928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.365 [2024-10-07 13:36:35.791950] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.365 [2024-10-07 13:36:35.792168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.365 [2024-10-07 13:36:35.792196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.366 [2024-10-07 13:36:35.792213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.792238] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.792263] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.792278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.792291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.792315] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.366 [2024-10-07 13:36:35.801664] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.801814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.801844] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.366 [2024-10-07 13:36:35.801861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.801886] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.801914] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.801931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.801945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.801970] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.366 [2024-10-07 13:36:35.802015] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.802202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.802230] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.366 [2024-10-07 13:36:35.802246] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.802492] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.802599] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.802621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.802635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.802660] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.366 [2024-10-07 13:36:35.815906] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.815940] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.816220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.816251] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.366 [2024-10-07 13:36:35.816268] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.816382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.816408] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.366 [2024-10-07 13:36:35.816423] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.816626] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.816655] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.817200] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.817227] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.817251] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.366 [2024-10-07 13:36:35.817269] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.817283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.817295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.817532] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.366 [2024-10-07 13:36:35.817556] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.366 [2024-10-07 13:36:35.826649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.826692] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.826913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.826943] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.366 [2024-10-07 13:36:35.826959] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.827067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.827093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.366 [2024-10-07 13:36:35.827109] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.829915] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.829948] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.830425] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.830448] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.830462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.830478] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.830491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.830504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.830607] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.366 [2024-10-07 13:36:35.830630] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.366 [2024-10-07 13:36:35.836963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.836995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.837232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.837261] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.366 [2024-10-07 13:36:35.837277] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.837384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.837410] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.366 [2024-10-07 13:36:35.837432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.838060] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.838089] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.838147] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.838166] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.838180] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.366 [2024-10-07 13:36:35.838213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.838228] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.838241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.838266] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.366 [2024-10-07 13:36:35.838282] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.366 [2024-10-07 13:36:35.847077] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.847295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.847445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.847476] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.366 [2024-10-07 13:36:35.847493] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.847638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.847672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.366 [2024-10-07 13:36:35.847692] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.847711] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.847911] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.847936] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.847951] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.847979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.848043] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.366 [2024-10-07 13:36:35.848065] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.848078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.848092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.848116] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.366 [2024-10-07 13:36:35.861391] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.861430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.861590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.861619] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.366 [2024-10-07 13:36:35.861636] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.861734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.861760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.366 [2024-10-07 13:36:35.861777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.861978] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.862021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.862069] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.862105] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.862118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.366 [2024-10-07 13:36:35.862136] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.862151] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.862164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.862347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.366 [2024-10-07 13:36:35.862370] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.366 [2024-10-07 13:36:35.877296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.877329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.877463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.877492] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.366 [2024-10-07 13:36:35.877509] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.877585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.877611] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.366 [2024-10-07 13:36:35.877627] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.877652] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.877683] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.877706] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.877720] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.877734] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.877757] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.877773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.877786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.877810] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.366 [2024-10-07 13:36:35.877827] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.366 [2024-10-07 13:36:35.891958] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.891991] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.892371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.892402] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.366 [2024-10-07 13:36:35.892419] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.892512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.892537] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.366 [2024-10-07 13:36:35.892553] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.892768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.892799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.893000] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.893025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.893041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.366 [2024-10-07 13:36:35.893058] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.893072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.893087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.893344] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.366 [2024-10-07 13:36:35.893369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.366 [2024-10-07 13:36:35.906650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.906691] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.906799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.906827] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.366 [2024-10-07 13:36:35.906844] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.906928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.906955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.366 [2024-10-07 13:36:35.906977] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.907003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.907025] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.366 [2024-10-07 13:36:35.907045] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.907060] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.907073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.907090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.366 [2024-10-07 13:36:35.907104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.366 [2024-10-07 13:36:35.907117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.366 [2024-10-07 13:36:35.907141] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.366 [2024-10-07 13:36:35.907157] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.366 [2024-10-07 13:36:35.919169] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.919203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.366 [2024-10-07 13:36:35.919429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.366 [2024-10-07 13:36:35.919459] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.366 [2024-10-07 13:36:35.919477] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.366 [2024-10-07 13:36:35.919588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.919616] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.367 [2024-10-07 13:36:35.919632] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.919748] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.919778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.919896] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:35.919918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:35.919932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.367 [2024-10-07 13:36:35.919948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:35.919962] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:35.919992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:35.922137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:35.922164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:35.929474] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:35.929507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:35.929789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.929820] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.367 [2024-10-07 13:36:35.929839] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.929920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.929947] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.367 [2024-10-07 13:36:35.929963] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.930071] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.930098] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.930214] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:35.930249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:35.930262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:35.930279] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:35.930293] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:35.930304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:35.930349] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:35.930368] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.367 [2024-10-07 13:36:35.939651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:35.939693] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:35.939808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.939838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.367 [2024-10-07 13:36:35.939856] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.939998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.940025] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.367 [2024-10-07 13:36:35.940042] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.940518] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.940548] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.940789] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:35.940815] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:35.940829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.367 [2024-10-07 13:36:35.940847] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:35.940867] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:35.940881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:35.941085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:35.941110] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:35.950019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:35.950052] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:35.950279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.950324] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.367 [2024-10-07 13:36:35.950341] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.950458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.950486] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.367 [2024-10-07 13:36:35.950502] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.953703] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.953736] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.954561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:35.954586] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:35.954600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:35.954617] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:35.954631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:35.954644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:35.955124] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:35.955148] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.367 [2024-10-07 13:36:35.960149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:35.960194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:35.960372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.960401] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.367 [2024-10-07 13:36:35.960418] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.960553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.960581] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.367 [2024-10-07 13:36:35.960597] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.960621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.960648] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.960675] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:35.960692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:35.960705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.367 [2024-10-07 13:36:35.960730] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:35.960748] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:35.960761] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:35.960774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:35.960797] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:35.970233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:35.970507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.970538] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.367 [2024-10-07 13:36:35.970556] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.970622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.970656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:35.970697] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:35.970714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:35.970727] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:35.970751] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:35.970868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.970897] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.367 [2024-10-07 13:36:35.970913] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.970939] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.970963] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:35.970978] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:35.970991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:35.971165] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.367 [2024-10-07 13:36:35.983896] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:35.983930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:35.984958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.984995] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.367 [2024-10-07 13:36:35.985013] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.985100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.985126] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.367 [2024-10-07 13:36:35.985142] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.985716] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.985747] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.985998] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:35.986024] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:35.986038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.367 [2024-10-07 13:36:35.986057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:35.986072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:35.986085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:35.986150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:35.986186] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:35.998017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:35.998051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:35.998653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.998707] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.367 [2024-10-07 13:36:35.998726] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.998807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:35.998833] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.367 [2024-10-07 13:36:35.998849] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:35.999707] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:35.999737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:36.000158] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:36.000182] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:36.000210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:36.000228] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:36.000242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:36.000275] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:36.000522] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:36.000548] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.367 [2024-10-07 13:36:36.008136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:36.008186] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:36.008367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:36.008397] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.367 [2024-10-07 13:36:36.008414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:36.008729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:36.008760] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.367 [2024-10-07 13:36:36.008777] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:36.008796] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:36.008934] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:36.008959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:36.008974] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:36.008987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.367 [2024-10-07 13:36:36.009094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:36.009115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:36.009129] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:36.009143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:36.009240] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:36.018223] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:36.018555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:36.018586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.367 [2024-10-07 13:36:36.018603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:36.018781] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:36.018912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.367 [2024-10-07 13:36:36.018949] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:36.018966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:36.018980] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:36.019093] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.367 [2024-10-07 13:36:36.019185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.367 [2024-10-07 13:36:36.019213] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.367 [2024-10-07 13:36:36.019230] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.367 [2024-10-07 13:36:36.019337] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.367 [2024-10-07 13:36:36.019441] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.367 [2024-10-07 13:36:36.019463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.367 [2024-10-07 13:36:36.019477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.367 [2024-10-07 13:36:36.021358] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.367 [2024-10-07 13:36:36.028396] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.028538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.028569] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.368 [2024-10-07 13:36:36.028586] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.028785] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.028846] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.028868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.028882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.028908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.028995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.029311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.029340] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.368 [2024-10-07 13:36:36.029357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.029407] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.029435] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.029451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.029464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.029489] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.041961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.042629] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.042807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.042838] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.368 [2024-10-07 13:36:36.042861] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.043321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.043351] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.368 [2024-10-07 13:36:36.043368] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.043387] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.043622] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.043649] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.043664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.043689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.368 [2024-10-07 13:36:36.043893] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.368 [2024-10-07 13:36:36.043918] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.043932] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.043945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.043995] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.368 [2024-10-07 13:36:36.052051] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.052278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.052308] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.368 [2024-10-07 13:36:36.052326] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.052351] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.052375] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.052391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.052404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.368 [2024-10-07 13:36:36.052428] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.368 [2024-10-07 13:36:36.056597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.057001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.057032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.368 [2024-10-07 13:36:36.057049] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.057607] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.057876] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.057902] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.057922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.058126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.062412] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.062659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.062698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.368 [2024-10-07 13:36:36.062716] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.062772] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.062801] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.062817] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.062831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.062854] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.066943] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.067104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.067133] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.368 [2024-10-07 13:36:36.067151] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.067176] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.067201] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.067216] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.067230] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.067255] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.072506] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.072679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.072709] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.368 [2024-10-07 13:36:36.072727] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.072753] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.072777] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.072793] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.072806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.073005] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.077181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.077400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.077430] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.368 [2024-10-07 13:36:36.077447] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.077472] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.077497] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.077512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.077526] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.077551] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.085137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.085610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.085641] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.368 [2024-10-07 13:36:36.085659] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.085875] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.085933] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.085954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.085969] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.085994] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.087274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.087417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.087446] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.368 [2024-10-07 13:36:36.087464] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.087489] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.087513] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.087527] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.087541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.087565] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.095806] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.096073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.096104] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.368 [2024-10-07 13:36:36.096122] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.097244] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.097472] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.097496] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.097511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.097631] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.097756] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.100686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.100719] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.368 [2024-10-07 13:36:36.100736] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.101003] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.101551] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.101575] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.101588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.101860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.105890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.106234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.106264] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.368 [2024-10-07 13:36:36.106282] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.106505] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.106630] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.106653] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.106677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.106786] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.109381] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.109528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.109558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.368 [2024-10-07 13:36:36.109575] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.109601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.109641] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.109661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.109698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.109741] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.116006] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.116420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.116451] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.368 [2024-10-07 13:36:36.116469] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.116687] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.116746] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.116767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.116781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.116806] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.119463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.119606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.368 [2024-10-07 13:36:36.119634] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.368 [2024-10-07 13:36:36.119651] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.368 [2024-10-07 13:36:36.119685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.368 [2024-10-07 13:36:36.119711] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.368 [2024-10-07 13:36:36.119726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.368 [2024-10-07 13:36:36.119740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.368 [2024-10-07 13:36:36.119764] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.368 [2024-10-07 13:36:36.129252] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.368 [2024-10-07 13:36:36.129710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.369 [2024-10-07 13:36:36.129742] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.369 [2024-10-07 13:36:36.129759] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.369 [2024-10-07 13:36:36.129981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.369 [2024-10-07 13:36:36.130044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.369 [2024-10-07 13:36:36.130080] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.369 [2024-10-07 13:36:36.130097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.369 [2024-10-07 13:36:36.130110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.369 [2024-10-07 13:36:36.130589] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.369 [2024-10-07 13:36:36.130710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.369 [2024-10-07 13:36:36.130743] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.369 [2024-10-07 13:36:36.130761] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.369 [2024-10-07 13:36:36.130986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.369 [2024-10-07 13:36:36.131044] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.369 [2024-10-07 13:36:36.131066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.369 [2024-10-07 13:36:36.131081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.369 [2024-10-07 13:36:36.131352] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.369 [2024-10-07 13:36:36.140329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.369 [2024-10-07 13:36:36.140633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.369 [2024-10-07 13:36:36.140672] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.369 [2024-10-07 13:36:36.140693] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.369 [2024-10-07 13:36:36.140801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.369 [2024-10-07 13:36:36.140839] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.369 [2024-10-07 13:36:36.141153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.369 [2024-10-07 13:36:36.141182] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.369 [2024-10-07 13:36:36.141198] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.369 [2024-10-07 13:36:36.141213] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.369 [2024-10-07 13:36:36.141226] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.369 [2024-10-07 13:36:36.141239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.369 [2024-10-07 13:36:36.141347] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.369 [2024-10-07 13:36:36.141372] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.369 [2024-10-07 13:36:36.141503] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.369 [2024-10-07 13:36:36.141524] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.369 [2024-10-07 13:36:36.141537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.369 [2024-10-07 13:36:36.143908] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.369 [2024-10-07 13:36:36.150788] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.369 [2024-10-07 13:36:36.151002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.369 [2024-10-07 13:36:36.151032] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.369 [2024-10-07 13:36:36.151050] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.369 [2024-10-07 13:36:36.151075] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.369 [2024-10-07 13:36:36.151119] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.369 [2024-10-07 13:36:36.151138] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.369 [2024-10-07 13:36:36.151152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.369 [2024-10-07 13:36:36.151179] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.369 [2024-10-07 13:36:36.151199] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.369 [2024-10-07 13:36:36.151356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.369 [2024-10-07 13:36:36.151384] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.369 [2024-10-07 13:36:36.151400] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.369 [2024-10-07 13:36:36.151425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.369 [2024-10-07 13:36:36.151448] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.369 [2024-10-07 13:36:36.151463] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.369 [2024-10-07 13:36:36.151477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.369 [2024-10-07 13:36:36.151501] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.369 [2024-10-07 13:36:36.161320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.369 [2024-10-07 13:36:36.161370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.369 [2024-10-07 13:36:36.161525] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.369 [2024-10-07 13:36:36.161555] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.369 [2024-10-07 13:36:36.161572] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.369 [2024-10-07 13:36:36.161680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.369 [2024-10-07 13:36:36.161708] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.369 [2024-10-07 13:36:36.161724] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.369 [2024-10-07 13:36:36.161743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.369 [2024-10-07 13:36:36.161769] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.369 [2024-10-07 13:36:36.161787] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.369 [2024-10-07 13:36:36.161800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.369 [2024-10-07 13:36:36.161813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.369 [2024-10-07 13:36:36.161838] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.369 [2024-10-07 13:36:36.161855] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.369 [2024-10-07 13:36:36.161868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.369 [2024-10-07 13:36:36.161881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.369 [2024-10-07 13:36:36.161909] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.369 [2024-10-07 13:36:36.172710] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.369 [2024-10-07 13:36:36.172743] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.369 [2024-10-07 13:36:36.173104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.369 [2024-10-07 13:36:36.173135] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.369 [2024-10-07 13:36:36.173152] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.369 [2024-10-07 13:36:36.173255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.369 [2024-10-07 13:36:36.173281] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.369 [2024-10-07 13:36:36.173297] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.369 [2024-10-07 13:36:36.173362] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 
00:25:56.369 [2024-10-07 13:36:36.173389] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.369 [2024-10-07 13:36:36.173411] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.369 [2024-10-07 13:36:36.173426] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.369 [2024-10-07 13:36:36.173440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.369 [2024-10-07 13:36:36.173457] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.369 [2024-10-07 13:36:36.173471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.369 [2024-10-07 13:36:36.173484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.369 [2024-10-07 13:36:36.173508] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.370 [2024-10-07 13:36:36.173525] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.370 [2024-10-07 13:36:36.189503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.370 [2024-10-07 13:36:36.189536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.370 [2024-10-07 13:36:36.189931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.370 [2024-10-07 13:36:36.189964] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.370 [2024-10-07 13:36:36.189981] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.370 [2024-10-07 13:36:36.190083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.370 [2024-10-07 13:36:36.190108] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.370 [2024-10-07 13:36:36.190124] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.370 [2024-10-07 13:36:36.190331] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.370 [2024-10-07 13:36:36.190361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.370 [2024-10-07 13:36:36.190561] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.370 [2024-10-07 13:36:36.190590] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.370 [2024-10-07 13:36:36.190605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.370 [2024-10-07 13:36:36.190623] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.370 [2024-10-07 13:36:36.190637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.370 [2024-10-07 13:36:36.190650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.370 [2024-10-07 13:36:36.190863] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.370 [2024-10-07 13:36:36.190887] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.370 [2024-10-07 13:36:36.204508] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.370 [2024-10-07 13:36:36.204541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.370 [2024-10-07 13:36:36.204778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.370 [2024-10-07 13:36:36.204809] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.370 [2024-10-07 13:36:36.204826] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.370 [2024-10-07 13:36:36.204911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.370 [2024-10-07 13:36:36.204938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.370 [2024-10-07 13:36:36.204954] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.370 [2024-10-07 13:36:36.204980] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 
00:25:56.370 [2024-10-07 13:36:36.205002] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.370 [2024-10-07 13:36:36.205023] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.370 [2024-10-07 13:36:36.205038] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.370 [2024-10-07 13:36:36.205051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.370 [2024-10-07 13:36:36.205068] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.370 [2024-10-07 13:36:36.205083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.370 [2024-10-07 13:36:36.205096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.370 [2024-10-07 13:36:36.205121] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.370 [2024-10-07 13:36:36.205137] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.370 [2024-10-07 13:36:36.214966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.370 [2024-10-07 13:36:36.214999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.370 [2024-10-07 13:36:36.215306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.370 [2024-10-07 13:36:36.215337] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.370 [2024-10-07 13:36:36.215355] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.370 [2024-10-07 13:36:36.215463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.370 [2024-10-07 13:36:36.215496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.370 [2024-10-07 13:36:36.215514] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.370 [2024-10-07 13:36:36.218479] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.370 [2024-10-07 13:36:36.218512] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.370 [2024-10-07 13:36:36.219959] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.370 [2024-10-07 13:36:36.219985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.370 [2024-10-07 13:36:36.220000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.370 [2024-10-07 13:36:36.220017] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.370 [2024-10-07 13:36:36.220032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.370 [2024-10-07 13:36:36.220046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.370 [2024-10-07 13:36:36.220094] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.370 [2024-10-07 13:36:36.220114] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.370 [2024-10-07 13:36:36.225378] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.370 [2024-10-07 13:36:36.225411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.370 [2024-10-07 13:36:36.225659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.370 [2024-10-07 13:36:36.225697] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.370 [2024-10-07 13:36:36.225715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.370 [2024-10-07 13:36:36.225827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.370 [2024-10-07 13:36:36.225854] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.370 [2024-10-07 13:36:36.225870] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.370 [2024-10-07 13:36:36.225897] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.370 [2024-10-07 13:36:36.225918] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.370 [2024-10-07 13:36:36.225939] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.370 [2024-10-07 13:36:36.225954] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.370 [2024-10-07 13:36:36.225968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.370 [2024-10-07 13:36:36.225985] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.370 [2024-10-07 13:36:36.226000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.370 [2024-10-07 13:36:36.226013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.370 [2024-10-07 13:36:36.226053] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.370 [2024-10-07 13:36:36.226069] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.370 [2024-10-07 13:36:36.235970] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.370 [2024-10-07 13:36:36.236004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.370 [2024-10-07 13:36:36.236117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.370 [2024-10-07 13:36:36.236146] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.370 [2024-10-07 13:36:36.236164] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.370 [2024-10-07 13:36:36.236246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.370 [2024-10-07 13:36:36.236274] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.370 [2024-10-07 13:36:36.236290] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.370 [2024-10-07 13:36:36.236510] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.370 [2024-10-07 13:36:36.236539] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.370 [2024-10-07 13:36:36.236834] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.370 [2024-10-07 13:36:36.236859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.370 [2024-10-07 13:36:36.236873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.370 [2024-10-07 13:36:36.236891] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.370 [2024-10-07 13:36:36.236905] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.370 [2024-10-07 13:36:36.236919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.370 [2024-10-07 13:36:36.236988] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.370 [2024-10-07 13:36:36.237009] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.370 [2024-10-07 13:36:36.249191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.370 [2024-10-07 13:36:36.249225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.370 [2024-10-07 13:36:36.249861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.370 [2024-10-07 13:36:36.249892] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.370 [2024-10-07 13:36:36.249909] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.371 [2024-10-07 13:36:36.250025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.371 [2024-10-07 13:36:36.250051] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.371 [2024-10-07 13:36:36.250067] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.371 [2024-10-07 13:36:36.250363] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.371 [2024-10-07 13:36:36.250394] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.371 [2024-10-07 13:36:36.250639] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.371 [2024-10-07 13:36:36.250664] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.371 [2024-10-07 13:36:36.250701] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.371 [2024-10-07 13:36:36.250720] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.371 [2024-10-07 13:36:36.250735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.371 [2024-10-07 13:36:36.250764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.371 [2024-10-07 13:36:36.250832] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.371 [2024-10-07 13:36:36.250853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.371 [2024-10-07 13:36:36.260230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.371 [2024-10-07 13:36:36.260279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.371 [2024-10-07 13:36:36.260597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.371 [2024-10-07 13:36:36.260627] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.371 [2024-10-07 13:36:36.260645] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.371 [2024-10-07 13:36:36.260769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.371 [2024-10-07 13:36:36.260797] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.371 [2024-10-07 13:36:36.260813] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.371 [2024-10-07 13:36:36.260922] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.371 [2024-10-07 13:36:36.260950] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.371 [2024-10-07 13:36:36.261053] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.371 [2024-10-07 13:36:36.261075] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.371 [2024-10-07 13:36:36.261089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.371 [2024-10-07 13:36:36.261106] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.371 [2024-10-07 13:36:36.261121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.371 [2024-10-07 13:36:36.261134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.371 [2024-10-07 13:36:36.263333] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.371 [2024-10-07 13:36:36.263360] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.371 [2024-10-07 13:36:36.270608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.371 [2024-10-07 13:36:36.270640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.371 [2024-10-07 13:36:36.270763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.371 [2024-10-07 13:36:36.270792] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.371 [2024-10-07 13:36:36.270808] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.371 [2024-10-07 13:36:36.270885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.371 [2024-10-07 13:36:36.270912] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.371 [2024-10-07 13:36:36.270942] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.371 [2024-10-07 13:36:36.270968] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.371 [2024-10-07 13:36:36.270990] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.371 [2024-10-07 13:36:36.271012] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.371 [2024-10-07 13:36:36.271027] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.371 [2024-10-07 13:36:36.271040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.371 [2024-10-07 13:36:36.271057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.371 [2024-10-07 13:36:36.271072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.371 [2024-10-07 13:36:36.271085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.371 [2024-10-07 13:36:36.271109] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.371 [2024-10-07 13:36:36.271126] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.371 [2024-10-07 13:36:36.281183] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.371 [2024-10-07 13:36:36.281215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.371 [2024-10-07 13:36:36.281376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.371 [2024-10-07 13:36:36.281406] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.371 [2024-10-07 13:36:36.281424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.371 [2024-10-07 13:36:36.281531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.371 [2024-10-07 13:36:36.281558] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.371 [2024-10-07 13:36:36.281574] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.371 [2024-10-07 13:36:36.281770] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.371 [2024-10-07 13:36:36.281800] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.371 [2024-10-07 13:36:36.282001] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.371 [2024-10-07 13:36:36.282025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.371 [2024-10-07 13:36:36.282039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.371 [2024-10-07 13:36:36.282057] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.371 [2024-10-07 13:36:36.282072] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.371 [2024-10-07 13:36:36.282084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.372 [2024-10-07 13:36:36.282150] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.372 [2024-10-07 13:36:36.282170] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.372 [2024-10-07 13:36:36.293733] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.372 [2024-10-07 13:36:36.293771] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.372 [2024-10-07 13:36:36.293915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.372 [2024-10-07 13:36:36.293944] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.372 [2024-10-07 13:36:36.293962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.372 [2024-10-07 13:36:36.294071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.372 [2024-10-07 13:36:36.294097] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.372 [2024-10-07 13:36:36.294114] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.372 [2024-10-07 13:36:36.294139] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.372 [2024-10-07 13:36:36.294160] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.372 [2024-10-07 13:36:36.294199] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.372 [2024-10-07 13:36:36.294219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.372 [2024-10-07 13:36:36.294233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.372 [2024-10-07 13:36:36.294250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.372 [2024-10-07 13:36:36.294264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.372 [2024-10-07 13:36:36.294277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.372 [2024-10-07 13:36:36.295645] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.372 [2024-10-07 13:36:36.295678] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.372 [2024-10-07 13:36:36.304500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.372 [2024-10-07 13:36:36.304532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.372 [2024-10-07 13:36:36.304769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.372 [2024-10-07 13:36:36.304800] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.372 [2024-10-07 13:36:36.304817] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.372 [2024-10-07 13:36:36.304928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.372 [2024-10-07 13:36:36.304955] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.372 [2024-10-07 13:36:36.304972] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.372 [2024-10-07 13:36:36.307778] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.372 [2024-10-07 13:36:36.307810] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.372 [2024-10-07 13:36:36.308870] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.372 [2024-10-07 13:36:36.308897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.372 [2024-10-07 13:36:36.308911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.372 [2024-10-07 13:36:36.308934] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.372 [2024-10-07 13:36:36.308970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.372 [2024-10-07 13:36:36.308983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.372 [2024-10-07 13:36:36.309729] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.372 [2024-10-07 13:36:36.309755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.372 [2024-10-07 13:36:36.314612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.372 [2024-10-07 13:36:36.314680] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.372 [2024-10-07 13:36:36.314849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.372 [2024-10-07 13:36:36.314879] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.372 [2024-10-07 13:36:36.314896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.372 [2024-10-07 13:36:36.315096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.372 [2024-10-07 13:36:36.315123] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.372 [2024-10-07 13:36:36.315139] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.372 [2024-10-07 13:36:36.315158] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.372 [2024-10-07 13:36:36.315184] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.372 [2024-10-07 13:36:36.315203] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.372 [2024-10-07 13:36:36.315217] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.372 [2024-10-07 13:36:36.315229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.372 [2024-10-07 13:36:36.315254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.372 [2024-10-07 13:36:36.315286] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.372 [2024-10-07 13:36:36.315298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.372 [2024-10-07 13:36:36.315311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.372 [2024-10-07 13:36:36.315334] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.372 [2024-10-07 13:36:36.324888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.372 [2024-10-07 13:36:36.324921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.372 [2024-10-07 13:36:36.325063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.372 [2024-10-07 13:36:36.325093] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.372 [2024-10-07 13:36:36.325111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.372 [2024-10-07 13:36:36.325217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.372 [2024-10-07 13:36:36.325244] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.372 [2024-10-07 13:36:36.325260] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.372 [2024-10-07 13:36:36.325291] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.372 [2024-10-07 13:36:36.325313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.372 [2024-10-07 13:36:36.325335] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.372 [2024-10-07 13:36:36.325350] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.372 [2024-10-07 13:36:36.325363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.372 [2024-10-07 13:36:36.325380] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.372 [2024-10-07 13:36:36.325395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.372 [2024-10-07 13:36:36.325407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.373 [2024-10-07 13:36:36.325432] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.373 [2024-10-07 13:36:36.325448] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.373 [2024-10-07 13:36:36.338377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.373 [2024-10-07 13:36:36.338411] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:56.373 [2024-10-07 13:36:36.339069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.373 [2024-10-07 13:36:36.339101] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422
00:25:56.373 [2024-10-07 13:36:36.339118] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set
00:25:56.373 [2024-10-07 13:36:36.339260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.373 [2024-10-07 13:36:36.339288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421
00:25:56.373 [2024-10-07 13:36:36.339304] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set
00:25:56.373 [2024-10-07 13:36:36.339524] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor
00:25:56.373 [2024-10-07 13:36:36.339554] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor
00:25:56.373 [2024-10-07 13:36:36.339827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.373 [2024-10-07 13:36:36.339851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.373 [2024-10-07 13:36:36.339865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.373 [2024-10-07 13:36:36.339883] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:56.373 [2024-10-07 13:36:36.339897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:56.373 [2024-10-07 13:36:36.339910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:56.373 [2024-10-07 13:36:36.339992] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.373 [2024-10-07 13:36:36.340013] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:56.373 [2024-10-07 13:36:36.349563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.373 [2024-10-07 13:36:36.349597] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.373 [2024-10-07 13:36:36.349837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.373 [2024-10-07 13:36:36.349869] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.373 [2024-10-07 13:36:36.349887] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.373 [2024-10-07 13:36:36.349996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.373 [2024-10-07 13:36:36.350023] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.373 [2024-10-07 13:36:36.350039] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.373 [2024-10-07 13:36:36.350147] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.373 [2024-10-07 13:36:36.350174] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.373 [2024-10-07 13:36:36.351562] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.373 [2024-10-07 13:36:36.351587] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.373 [2024-10-07 13:36:36.351602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.373 [2024-10-07 13:36:36.351618] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.373 [2024-10-07 13:36:36.351632] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.373 [2024-10-07 13:36:36.351644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.373 [2024-10-07 13:36:36.353799] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.373 [2024-10-07 13:36:36.353826] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.373 [2024-10-07 13:36:36.359704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.373 [2024-10-07 13:36:36.359736] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.373 [2024-10-07 13:36:36.359850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.373 [2024-10-07 13:36:36.359880] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.373 [2024-10-07 13:36:36.359896] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.373 [2024-10-07 13:36:36.360034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.373 [2024-10-07 13:36:36.360061] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.373 [2024-10-07 13:36:36.360077] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.373 [2024-10-07 13:36:36.360102] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.373 [2024-10-07 13:36:36.360124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.373 [2024-10-07 13:36:36.360145] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.373 [2024-10-07 13:36:36.360160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.373 [2024-10-07 13:36:36.360174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.373 [2024-10-07 13:36:36.360191] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.373 [2024-10-07 13:36:36.360211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.373 [2024-10-07 13:36:36.360224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.373 [2024-10-07 13:36:36.360254] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.373 [2024-10-07 13:36:36.360279] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.373 [2024-10-07 13:36:36.369819] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.373 [2024-10-07 13:36:36.370027] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.373 [2024-10-07 13:36:36.370162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.373 [2024-10-07 13:36:36.370192] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.373 [2024-10-07 13:36:36.370209] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.373 [2024-10-07 13:36:36.370318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.373 [2024-10-07 13:36:36.370346] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.373 [2024-10-07 13:36:36.370363] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.373 [2024-10-07 13:36:36.370381] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.373 [2024-10-07 13:36:36.370567] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.373 [2024-10-07 13:36:36.370594] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.373 [2024-10-07 13:36:36.370624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.373 [2024-10-07 13:36:36.370636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.373 [2024-10-07 13:36:36.370711] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.373 [2024-10-07 13:36:36.370734] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.373 [2024-10-07 13:36:36.370747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.373 [2024-10-07 13:36:36.370761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.373 [2024-10-07 13:36:36.370942] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.373 [2024-10-07 13:36:36.382495] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.373 [2024-10-07 13:36:36.382529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.373 [2024-10-07 13:36:36.383275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.373 [2024-10-07 13:36:36.383307] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.373 [2024-10-07 13:36:36.383325] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.373 [2024-10-07 13:36:36.383401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.373 [2024-10-07 13:36:36.383427] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.373 [2024-10-07 13:36:36.383443] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.373 [2024-10-07 13:36:36.383817] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.373 [2024-10-07 13:36:36.383853] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.373 [2024-10-07 13:36:36.384067] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.373 [2024-10-07 13:36:36.384091] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.373 [2024-10-07 13:36:36.384106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.373 [2024-10-07 13:36:36.384123] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.374 [2024-10-07 13:36:36.384137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.374 [2024-10-07 13:36:36.384150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.374 [2024-10-07 13:36:36.384217] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.374 [2024-10-07 13:36:36.384252] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.374 [2024-10-07 13:36:36.392626] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.374 [2024-10-07 13:36:36.392683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.374 [2024-10-07 13:36:36.392826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.374 [2024-10-07 13:36:36.392856] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.374 [2024-10-07 13:36:36.392874] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.374 [2024-10-07 13:36:36.392955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.374 [2024-10-07 13:36:36.392981] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.374 [2024-10-07 13:36:36.392997] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.374 [2024-10-07 13:36:36.395815] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.374 [2024-10-07 13:36:36.395847] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.374 [2024-10-07 13:36:36.396764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.374 [2024-10-07 13:36:36.396789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.374 [2024-10-07 13:36:36.396804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.374 [2024-10-07 13:36:36.396822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.374 [2024-10-07 13:36:36.396837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.374 [2024-10-07 13:36:36.396850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.374 [2024-10-07 13:36:36.397415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.374 [2024-10-07 13:36:36.397439] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.374 [2024-10-07 13:36:36.402954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.374 [2024-10-07 13:36:36.402986] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.374 [2024-10-07 13:36:36.403172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.374 [2024-10-07 13:36:36.403211] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.374 [2024-10-07 13:36:36.403229] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.374 [2024-10-07 13:36:36.403314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.374 [2024-10-07 13:36:36.403341] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.374 [2024-10-07 13:36:36.403357] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.374 [2024-10-07 13:36:36.403383] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.374 [2024-10-07 13:36:36.403404] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.374 [2024-10-07 13:36:36.403424] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.374 [2024-10-07 13:36:36.403440] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.374 [2024-10-07 13:36:36.403454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.374 [2024-10-07 13:36:36.403471] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.374 [2024-10-07 13:36:36.403485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.374 [2024-10-07 13:36:36.403498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.374 [2024-10-07 13:36:36.403523] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.374 [2024-10-07 13:36:36.403540] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.374 [2024-10-07 13:36:36.413134] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.374 [2024-10-07 13:36:36.413182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.374 [2024-10-07 13:36:36.413320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.374 [2024-10-07 13:36:36.413350] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.374 [2024-10-07 13:36:36.413367] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.374 [2024-10-07 13:36:36.413446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.374 [2024-10-07 13:36:36.413471] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.374 [2024-10-07 13:36:36.413487] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.374 [2024-10-07 13:36:36.413682] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.374 [2024-10-07 13:36:36.413727] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.374 [2024-10-07 13:36:36.413791] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.374 [2024-10-07 13:36:36.413812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.374 [2024-10-07 13:36:36.413826] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.374 [2024-10-07 13:36:36.413843] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.374 [2024-10-07 13:36:36.413857] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.374 [2024-10-07 13:36:36.413875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.374 [2024-10-07 13:36:36.414059] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.374 [2024-10-07 13:36:36.414098] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.374 [2024-10-07 13:36:36.425888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.374 [2024-10-07 13:36:36.425922] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.374 [2024-10-07 13:36:36.426757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.374 [2024-10-07 13:36:36.426788] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.374 [2024-10-07 13:36:36.426806] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.374 [2024-10-07 13:36:36.427361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.374 [2024-10-07 13:36:36.427407] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.374 [2024-10-07 13:36:36.427424] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.374 [2024-10-07 13:36:36.427999] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.374 [2024-10-07 13:36:36.428029] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.374 [2024-10-07 13:36:36.428148] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.374 [2024-10-07 13:36:36.428172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.374 [2024-10-07 13:36:36.428187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.374 [2024-10-07 13:36:36.428204] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.374 [2024-10-07 13:36:36.428219] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.374 [2024-10-07 13:36:36.428232] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.374 [2024-10-07 13:36:36.428258] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.374 [2024-10-07 13:36:36.428275] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.374 [2024-10-07 13:36:36.436210] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.374 [2024-10-07 13:36:36.436244] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.374 [2024-10-07 13:36:36.436656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.374 [2024-10-07 13:36:36.436696] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.374 [2024-10-07 13:36:36.436715] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.374 [2024-10-07 13:36:36.436829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.374 [2024-10-07 13:36:36.436855] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.374 [2024-10-07 13:36:36.436871] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.374 [2024-10-07 13:36:36.437026] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.374 [2024-10-07 13:36:36.437062] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.374 [2024-10-07 13:36:36.437179] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.374 [2024-10-07 13:36:36.437202] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.374 [2024-10-07 13:36:36.437217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.374 [2024-10-07 13:36:36.437234] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.374 [2024-10-07 13:36:36.437248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.374 [2024-10-07 13:36:36.437261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.374 [2024-10-07 13:36:36.437367] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.374 [2024-10-07 13:36:36.437403] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.374 [2024-10-07 13:36:36.446325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.375 [2024-10-07 13:36:36.446371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.375 [2024-10-07 13:36:36.446547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.375 [2024-10-07 13:36:36.446577] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.375 [2024-10-07 13:36:36.446594] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.375 [2024-10-07 13:36:36.446919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.375 [2024-10-07 13:36:36.446950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.375 [2024-10-07 13:36:36.446967] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.375 [2024-10-07 13:36:36.446986] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.375 [2024-10-07 13:36:36.447114] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.375 [2024-10-07 13:36:36.447141] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.375 [2024-10-07 13:36:36.447155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.375 [2024-10-07 13:36:36.447168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.375 [2024-10-07 13:36:36.447274] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.375 [2024-10-07 13:36:36.447317] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.375 [2024-10-07 13:36:36.447331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.375 [2024-10-07 13:36:36.447344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.375 [2024-10-07 13:36:36.447450] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.375 [2024-10-07 13:36:36.457772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.375 [2024-10-07 13:36:36.457807] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.375 [2024-10-07 13:36:36.458145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.375 [2024-10-07 13:36:36.458176] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.375 [2024-10-07 13:36:36.458201] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.375 [2024-10-07 13:36:36.458331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.375 [2024-10-07 13:36:36.458358] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.375 [2024-10-07 13:36:36.458375] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.375 [2024-10-07 13:36:36.458425] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.375 [2024-10-07 13:36:36.458452] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.375 [2024-10-07 13:36:36.458489] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.375 [2024-10-07 13:36:36.458509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.375 [2024-10-07 13:36:36.458523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.375 [2024-10-07 13:36:36.458540] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.375 [2024-10-07 13:36:36.458555] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.375 [2024-10-07 13:36:36.458568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.375 [2024-10-07 13:36:36.458824] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.375 [2024-10-07 13:36:36.458849] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.375 [2024-10-07 13:36:36.472590] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.375 [2024-10-07 13:36:36.472624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.375 [2024-10-07 13:36:36.473406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.375 [2024-10-07 13:36:36.473438] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.375 [2024-10-07 13:36:36.473456] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.375 [2024-10-07 13:36:36.473564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.375 [2024-10-07 13:36:36.473591] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.375 [2024-10-07 13:36:36.473607] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.375 [2024-10-07 13:36:36.473860] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.375 [2024-10-07 13:36:36.473891] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.375 [2024-10-07 13:36:36.474090] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.375 [2024-10-07 13:36:36.474114] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.375 [2024-10-07 13:36:36.474128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.375 [2024-10-07 13:36:36.474146] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.375 [2024-10-07 13:36:36.474161] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.375 [2024-10-07 13:36:36.474173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.375 [2024-10-07 13:36:36.474410] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.375 [2024-10-07 13:36:36.474435] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.375 [2024-10-07 13:36:36.482704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.375 [2024-10-07 13:36:36.484215] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.375 [2024-10-07 13:36:36.484355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.375 [2024-10-07 13:36:36.484383] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.375 [2024-10-07 13:36:36.484401] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.375 [2024-10-07 13:36:36.489154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.375 [2024-10-07 13:36:36.489187] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.375 [2024-10-07 13:36:36.489205] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.375 [2024-10-07 13:36:36.489224] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.375 [2024-10-07 13:36:36.489320] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.375 [2024-10-07 13:36:36.489344] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.375 [2024-10-07 13:36:36.489358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.375 [2024-10-07 13:36:36.489371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.375 [2024-10-07 13:36:36.489396] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.375 [2024-10-07 13:36:36.489414] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.375 [2024-10-07 13:36:36.489427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.375 [2024-10-07 13:36:36.489441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.375 [2024-10-07 13:36:36.489464] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.375 [2024-10-07 13:36:36.493274] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.375 [2024-10-07 13:36:36.493450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.375 [2024-10-07 13:36:36.493480] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.375 [2024-10-07 13:36:36.493497] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.375 [2024-10-07 13:36:36.493522] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.375 [2024-10-07 13:36:36.493557] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.375 [2024-10-07 13:36:36.493572] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.375 [2024-10-07 13:36:36.493585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.375 [2024-10-07 13:36:36.493609] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.375 [2024-10-07 13:36:36.494303] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.375 [2024-10-07 13:36:36.494500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.375 [2024-10-07 13:36:36.494528] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.375 [2024-10-07 13:36:36.494545] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.375 [2024-10-07 13:36:36.494570] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.375 [2024-10-07 13:36:36.494595] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.375 [2024-10-07 13:36:36.494610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.376 [2024-10-07 13:36:36.494623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.376 [2024-10-07 13:36:36.494647] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.376 [2024-10-07 13:36:36.505200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.376 [2024-10-07 13:36:36.505471] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.376 [2024-10-07 13:36:36.505614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.376 [2024-10-07 13:36:36.505645] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.376 [2024-10-07 13:36:36.505663] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.376 [2024-10-07 13:36:36.505793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.376 [2024-10-07 13:36:36.505821] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.376 [2024-10-07 13:36:36.505838] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.376 [2024-10-07 13:36:36.505857] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.376 [2024-10-07 13:36:36.506214] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.376 [2024-10-07 13:36:36.506250] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.376 [2024-10-07 13:36:36.506264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.376 [2024-10-07 13:36:36.506277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.376 [2024-10-07 13:36:36.506509] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.376 [2024-10-07 13:36:36.506535] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.376 [2024-10-07 13:36:36.506549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.376 [2024-10-07 13:36:36.506563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.376 [2024-10-07 13:36:36.506614] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.376 [2024-10-07 13:36:36.517170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.376 [2024-10-07 13:36:36.517203] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.376 [2024-10-07 13:36:36.517414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.376 [2024-10-07 13:36:36.517445] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.376 [2024-10-07 13:36:36.517463] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.376 [2024-10-07 13:36:36.517574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.376 [2024-10-07 13:36:36.517601] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.376 [2024-10-07 13:36:36.517617] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.376 [2024-10-07 13:36:36.517745] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.376 [2024-10-07 13:36:36.517774] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.376 [2024-10-07 13:36:36.520622] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.376 [2024-10-07 13:36:36.520648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.376 [2024-10-07 13:36:36.520662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.376 [2024-10-07 13:36:36.520689] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.376 [2024-10-07 13:36:36.520704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.376 [2024-10-07 13:36:36.520726] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.376 [2024-10-07 13:36:36.521663] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.376 [2024-10-07 13:36:36.521698] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.376 [2024-10-07 13:36:36.527292] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.376 [2024-10-07 13:36:36.527341] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.376 [2024-10-07 13:36:36.527469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.376 [2024-10-07 13:36:36.527498] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.376 [2024-10-07 13:36:36.527516] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.376 [2024-10-07 13:36:36.527602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.376 [2024-10-07 13:36:36.527629] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.376 [2024-10-07 13:36:36.527657] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.376 [2024-10-07 13:36:36.527685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.376 [2024-10-07 13:36:36.527712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.376 [2024-10-07 13:36:36.527731] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.376 [2024-10-07 13:36:36.527744] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.376 [2024-10-07 13:36:36.527757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.376 [2024-10-07 13:36:36.527782] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.376 [2024-10-07 13:36:36.527799] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.376 [2024-10-07 13:36:36.527812] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.376 [2024-10-07 13:36:36.527825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.376 [2024-10-07 13:36:36.527852] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.376 [2024-10-07 13:36:36.537379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.376 [2024-10-07 13:36:36.537729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.376 [2024-10-07 13:36:36.537761] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.376 [2024-10-07 13:36:36.537779] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.376 [2024-10-07 13:36:36.537844] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.376 [2024-10-07 13:36:36.537879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.376 [2024-10-07 13:36:36.537909] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.376 [2024-10-07 13:36:36.537925] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.376 [2024-10-07 13:36:36.537938] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.376 [2024-10-07 13:36:36.537962] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.376 [2024-10-07 13:36:36.538056] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.376 [2024-10-07 13:36:36.538084] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.376 [2024-10-07 13:36:36.538100] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.376 [2024-10-07 13:36:36.538126] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.376 [2024-10-07 13:36:36.538150] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.376 [2024-10-07 13:36:36.538165] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.376 [2024-10-07 13:36:36.538179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.376 [2024-10-07 13:36:36.538202] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.376 [2024-10-07 13:36:36.551013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.376 [2024-10-07 13:36:36.551045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.376 [2024-10-07 13:36:36.551166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.376 [2024-10-07 13:36:36.551196] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.376 [2024-10-07 13:36:36.551213] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.376 [2024-10-07 13:36:36.551291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.376 [2024-10-07 13:36:36.551319] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.376 [2024-10-07 13:36:36.551335] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.376 [2024-10-07 13:36:36.551361] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.376 [2024-10-07 13:36:36.551382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.376 [2024-10-07 13:36:36.551403] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.377 [2024-10-07 13:36:36.551423] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.377 [2024-10-07 13:36:36.551437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.377 [2024-10-07 13:36:36.551455] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.377 [2024-10-07 13:36:36.551469] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.377 [2024-10-07 13:36:36.551482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.377 [2024-10-07 13:36:36.551506] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.377 [2024-10-07 13:36:36.551536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.377 [2024-10-07 13:36:36.566801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.377 [2024-10-07 13:36:36.566837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.377 [2024-10-07 13:36:36.567043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.377 [2024-10-07 13:36:36.567074] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.377 [2024-10-07 13:36:36.567092] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.377 [2024-10-07 13:36:36.567204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.377 [2024-10-07 13:36:36.567231] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.377 [2024-10-07 13:36:36.567247] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.377 [2024-10-07 13:36:36.567273] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.377 [2024-10-07 13:36:36.567295] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.377 [2024-10-07 13:36:36.567316] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.377 [2024-10-07 13:36:36.567331] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.377 [2024-10-07 13:36:36.567346] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.377 [2024-10-07 13:36:36.567363] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.377 [2024-10-07 13:36:36.567378] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.377 [2024-10-07 13:36:36.567390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.377 [2024-10-07 13:36:36.567415] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.377 [2024-10-07 13:36:36.567433] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.377 [2024-10-07 13:36:36.579722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.377 [2024-10-07 13:36:36.579757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.377 [2024-10-07 13:36:36.579972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.377 [2024-10-07 13:36:36.580003] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.377 [2024-10-07 13:36:36.580020] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.377 [2024-10-07 13:36:36.580128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.377 [2024-10-07 13:36:36.580160] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.377 [2024-10-07 13:36:36.580177] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.377 [2024-10-07 13:36:36.580285] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.377 [2024-10-07 13:36:36.580313] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.377 [2024-10-07 13:36:36.580430] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.377 [2024-10-07 13:36:36.580451] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.377 [2024-10-07 13:36:36.580465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.377 [2024-10-07 13:36:36.580482] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.377 [2024-10-07 13:36:36.580497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.377 [2024-10-07 13:36:36.580510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.377 [2024-10-07 13:36:36.582951] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.377 [2024-10-07 13:36:36.582979] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.377 [2024-10-07 13:36:36.589954] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.377 [2024-10-07 13:36:36.589992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.377 [2024-10-07 13:36:36.590468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.377 [2024-10-07 13:36:36.590499] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.377 [2024-10-07 13:36:36.590524] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.377 [2024-10-07 13:36:36.590630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.377 [2024-10-07 13:36:36.590674] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.377 [2024-10-07 13:36:36.590694] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.377 [2024-10-07 13:36:36.591021] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.377 [2024-10-07 13:36:36.591049] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.377 [2024-10-07 13:36:36.591116] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.377 [2024-10-07 13:36:36.591137] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.377 [2024-10-07 13:36:36.591150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.377 [2024-10-07 13:36:36.591183] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.377 [2024-10-07 13:36:36.591198] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.377 [2024-10-07 13:36:36.591211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.377 [2024-10-07 13:36:36.591237] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.377 [2024-10-07 13:36:36.591253] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.377 [2024-10-07 13:36:36.600075] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.377 [2024-10-07 13:36:36.600128] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.377 [2024-10-07 13:36:36.600259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.377 [2024-10-07 13:36:36.600288] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.377 [2024-10-07 13:36:36.600306] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.377 [2024-10-07 13:36:36.600660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.377 [2024-10-07 13:36:36.600698] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.377 [2024-10-07 13:36:36.600717] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.377 [2024-10-07 13:36:36.600737] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.377 [2024-10-07 13:36:36.600943] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.377 [2024-10-07 13:36:36.600970] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.377 [2024-10-07 13:36:36.600984] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.377 [2024-10-07 13:36:36.600997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.377 [2024-10-07 13:36:36.601060] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.377 [2024-10-07 13:36:36.601081] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.377 [2024-10-07 13:36:36.601094] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.377 [2024-10-07 13:36:36.601107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.377 [2024-10-07 13:36:36.601287] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.377 [2024-10-07 13:36:36.614914] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.377 [2024-10-07 13:36:36.614948] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.377 [2024-10-07 13:36:36.615269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.377 [2024-10-07 13:36:36.615301] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.377 [2024-10-07 13:36:36.615319] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.377 [2024-10-07 13:36:36.615427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.377 [2024-10-07 13:36:36.615455] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.377 [2024-10-07 13:36:36.615471] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.377 [2024-10-07 13:36:36.615685] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.377 [2024-10-07 13:36:36.615715] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.377 [2024-10-07 13:36:36.615763] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.377 [2024-10-07 13:36:36.615784] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.377 [2024-10-07 13:36:36.615803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.377 [2024-10-07 13:36:36.615821] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.377 [2024-10-07 13:36:36.615837] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.377 [2024-10-07 13:36:36.615850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.378 [2024-10-07 13:36:36.616032] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.378 [2024-10-07 13:36:36.616056] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.378 [2024-10-07 13:36:36.630230] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.378 [2024-10-07 13:36:36.630263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.378 [2024-10-07 13:36:36.630619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.378 [2024-10-07 13:36:36.630650] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.378 [2024-10-07 13:36:36.630677] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.378 [2024-10-07 13:36:36.630759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.378 [2024-10-07 13:36:36.630786] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.378 [2024-10-07 13:36:36.630805] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.378 [2024-10-07 13:36:36.631009] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.378 [2024-10-07 13:36:36.631039] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.378 [2024-10-07 13:36:36.631248] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.378 [2024-10-07 13:36:36.631273] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.378 [2024-10-07 13:36:36.631288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.378 [2024-10-07 13:36:36.631315] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.378 [2024-10-07 13:36:36.631329] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.378 [2024-10-07 13:36:36.631342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.378 [2024-10-07 13:36:36.631406] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.378 [2024-10-07 13:36:36.631441] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.378 8459.47 IOPS, 33.04 MiB/s [2024-10-07T11:36:38.090Z] [2024-10-07 13:36:36.642248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.378 [2024-10-07 13:36:36.642278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.378 [2024-10-07 13:36:36.642446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.378 [2024-10-07 13:36:36.642475] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.378 [2024-10-07 13:36:36.642492] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.378 [2024-10-07 13:36:36.642629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.378 [2024-10-07 13:36:36.642677] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.378 [2024-10-07 13:36:36.642696] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.378 [2024-10-07 13:36:36.642721] 
nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.378 [2024-10-07 13:36:36.642743] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.378 [2024-10-07 13:36:36.642764] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.378 [2024-10-07 13:36:36.642780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.378 [2024-10-07 13:36:36.642793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.378 [2024-10-07 13:36:36.642809] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.378 [2024-10-07 13:36:36.642824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.378 [2024-10-07 13:36:36.642837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.378 [2024-10-07 13:36:36.642862] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.378 [2024-10-07 13:36:36.642878] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.378 [2024-10-07 13:36:36.652355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.378 [2024-10-07 13:36:36.652397] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.378 [2024-10-07 13:36:36.652558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.378 [2024-10-07 13:36:36.652586] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.378 [2024-10-07 13:36:36.652603] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.378 [2024-10-07 13:36:36.652702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.378 [2024-10-07 13:36:36.652731] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.378 [2024-10-07 13:36:36.652748] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.378 [2024-10-07 13:36:36.652766] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.378 [2024-10-07 13:36:36.652792] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.378 [2024-10-07 13:36:36.652810] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.378 [2024-10-07 13:36:36.652822] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.378 [2024-10-07 13:36:36.652835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:56.378 [2024-10-07 13:36:36.652860] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.378 [2024-10-07 13:36:36.652877] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.378 [2024-10-07 13:36:36.652889] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.378 [2024-10-07 13:36:36.652902] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.378 [2024-10-07 13:36:36.652939] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.378 [2024-10-07 13:36:36.662431] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.378 [2024-10-07 13:36:36.662621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.378 [2024-10-07 13:36:36.662649] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.378 [2024-10-07 13:36:36.662672] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.378 [2024-10-07 13:36:36.662712] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.378 [2024-10-07 13:36:36.662744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.378 [2024-10-07 13:36:36.662773] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.378 [2024-10-07 13:36:36.662789] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.378 [2024-10-07 13:36:36.662802] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.378 [2024-10-07 13:36:36.662825] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.378 [2024-10-07 13:36:36.662938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.378 [2024-10-07 13:36:36.662969] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.378 [2024-10-07 13:36:36.662985] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.378 [2024-10-07 13:36:36.663010] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.378 [2024-10-07 13:36:36.663033] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.378 [2024-10-07 13:36:36.663048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.378 [2024-10-07 13:36:36.663062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.378 [2024-10-07 13:36:36.663085] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.378 [2024-10-07 13:36:36.672507] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.378 [2024-10-07 13:36:36.672711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.378 [2024-10-07 13:36:36.672740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.378 [2024-10-07 13:36:36.672757] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.378 [2024-10-07 13:36:36.672782] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.378 [2024-10-07 13:36:36.672806] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.378 [2024-10-07 13:36:36.672821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.378 [2024-10-07 13:36:36.672835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.378 [2024-10-07 13:36:36.672870] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.378 [2024-10-07 13:36:36.672897] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.378 [2024-10-07 13:36:36.673059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.378 [2024-10-07 13:36:36.673086] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.378 [2024-10-07 13:36:36.673102] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.378 [2024-10-07 13:36:36.673133] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.378 [2024-10-07 13:36:36.673157] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.378 [2024-10-07 13:36:36.673172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.378 [2024-10-07 13:36:36.673185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.378 [2024-10-07 13:36:36.673210] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.378 00:25:56.378 Latency(us) 00:25:56.378 [2024-10-07T11:36:38.090Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.378 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:56.378 Verification LBA range: start 0x0 length 0x4000 00:25:56.378 NVMe0n1 : 15.05 8437.70 32.96 0.00 0.00 15102.09 3034.07 44661.57 00:25:56.378 [2024-10-07T11:36:38.090Z] =================================================================================================================== 00:25:56.379 [2024-10-07T11:36:38.091Z] Total : 8437.70 32.96 0.00 0.00 15102.09 3034.07 44661.57 00:25:56.379 [2024-10-07 13:36:36.684879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.379 [2024-10-07 13:36:36.685019] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.379 [2024-10-07 13:36:36.685162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.379 [2024-10-07 13:36:36.685190] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.379 [2024-10-07 13:36:36.685207] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.379 [2024-10-07 13:36:36.686012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.379 [2024-10-07 13:36:36.686041] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.379 [2024-10-07 13:36:36.686057] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.379 [2024-10-07 13:36:36.686076] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.379 
[2024-10-07 13:36:36.686097] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.379 [2024-10-07 13:36:36.686115] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.379 [2024-10-07 13:36:36.686128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.379 [2024-10-07 13:36:36.686141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.379 [2024-10-07 13:36:36.686160] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:56.379 [2024-10-07 13:36:36.686176] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.379 [2024-10-07 13:36:36.686189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.379 [2024-10-07 13:36:36.686202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.379 [2024-10-07 13:36:36.686218] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.379 [2024-10-07 13:36:36.694969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.379 [2024-10-07 13:36:36.695114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.379 [2024-10-07 13:36:36.695148] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.379 [2024-10-07 13:36:36.695166] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.379 [2024-10-07 13:36:36.695187] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.379 [2024-10-07 13:36:36.695218] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.379 [2024-10-07 13:36:36.695237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.379 [2024-10-07 13:36:36.695250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.379 [2024-10-07 13:36:36.695269] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.379 [2024-10-07 13:36:36.695294] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.379 [2024-10-07 13:36:36.695398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.379 [2024-10-07 13:36:36.695425] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.379 [2024-10-07 13:36:36.695442] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.379 [2024-10-07 13:36:36.695463] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.379 [2024-10-07 13:36:36.695483] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.379 [2024-10-07 13:36:36.695497] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.379 [2024-10-07 13:36:36.695510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.379 [2024-10-07 13:36:36.695528] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.379 [2024-10-07 13:36:36.705041] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.379 [2024-10-07 13:36:36.705205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.379 [2024-10-07 13:36:36.705233] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.379 [2024-10-07 13:36:36.705251] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.379 [2024-10-07 13:36:36.705271] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.379 [2024-10-07 13:36:36.705294] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.379 [2024-10-07 13:36:36.705309] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.379 [2024-10-07 13:36:36.705322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.379 [2024-10-07 13:36:36.705341] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.379 [2024-10-07 13:36:36.705369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.379 [2024-10-07 13:36:36.705538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.379 [2024-10-07 13:36:36.705564] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.379 [2024-10-07 13:36:36.705580] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.379 [2024-10-07 13:36:36.705601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.379 [2024-10-07 13:36:36.705626] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.379 [2024-10-07 13:36:36.705641] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.379 [2024-10-07 13:36:36.705654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.379 [2024-10-07 13:36:36.705681] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.379 [2024-10-07 13:36:36.715110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.379 [2024-10-07 13:36:36.715316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.379 [2024-10-07 13:36:36.715344] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.379 [2024-10-07 13:36:36.715361] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.379 [2024-10-07 13:36:36.715382] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.379 [2024-10-07 13:36:36.715405] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.379 [2024-10-07 13:36:36.715420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.379 [2024-10-07 13:36:36.715434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.379 [2024-10-07 13:36:36.715452] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.379 [2024-10-07 13:36:36.715479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.379 [2024-10-07 13:36:36.715631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.379 [2024-10-07 13:36:36.715657] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.379 [2024-10-07 13:36:36.715681] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.379 [2024-10-07 13:36:36.715704] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.379 [2024-10-07 13:36:36.715723] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.379 [2024-10-07 13:36:36.715738] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.379 [2024-10-07 13:36:36.715751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.379 [2024-10-07 13:36:36.715768] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.379 [2024-10-07 13:36:36.725181] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.379 [2024-10-07 13:36:36.725319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.379 [2024-10-07 13:36:36.725347] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.379 [2024-10-07 13:36:36.725364] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.379 [2024-10-07 13:36:36.725386] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.379 [2024-10-07 13:36:36.725406] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.379 [2024-10-07 13:36:36.725420] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.379 [2024-10-07 13:36:36.725433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.379 [2024-10-07 13:36:36.725458] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.379 [2024-10-07 13:36:36.725544] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.379 [2024-10-07 13:36:36.725753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.379 [2024-10-07 13:36:36.725782] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.379 [2024-10-07 13:36:36.725799] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.379 [2024-10-07 13:36:36.725820] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.379 [2024-10-07 13:36:36.725840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.379 [2024-10-07 13:36:36.725854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.379 [2024-10-07 13:36:36.725868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.379 [2024-10-07 13:36:36.725886] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.379 [2024-10-07 13:36:36.735249] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.379 [2024-10-07 13:36:36.735388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.379 [2024-10-07 13:36:36.735415] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ce3790 with addr=10.0.0.2, port=4421 00:25:56.380 [2024-10-07 13:36:36.735432] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ce3790 is same with the state(6) to be set 00:25:56.380 [2024-10-07 13:36:36.735454] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ce3790 (9): Bad file descriptor 00:25:56.380 [2024-10-07 13:36:36.735473] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.380 [2024-10-07 13:36:36.735487] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.380 [2024-10-07 13:36:36.735500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.380 [2024-10-07 13:36:36.735518] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.380 [2024-10-07 13:36:36.735640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:56.380 [2024-10-07 13:36:36.735801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.380 [2024-10-07 13:36:36.735828] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d13f30 with addr=10.0.0.2, port=4422 00:25:56.380 [2024-10-07 13:36:36.735845] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d13f30 is same with the state(6) to be set 00:25:56.380 [2024-10-07 13:36:36.735866] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d13f30 (9): Bad file descriptor 00:25:56.380 [2024-10-07 13:36:36.735887] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:56.380 [2024-10-07 13:36:36.735901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:56.380 [2024-10-07 13:36:36.735915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:56.380 [2024-10-07 13:36:36.735933] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:56.380 Received shutdown signal, test time was about 15.000000 seconds 00:25:56.380 00:25:56.380 Latency(us) 00:25:56.380 [2024-10-07T11:36:38.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.380 [2024-10-07T11:36:38.092Z] =================================================================================================================== 00:25:56.380 [2024-10-07T11:36:38.092Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # killprocess 1872271 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1872271 ']' 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1872271 00:25:56.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (1872271) - No such process 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@977 -- # echo 'Process with pid 1872271 is not found' 00:25:56.380 Process with pid 1872271 is not found 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # nvmftestfini 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # nvmfcleanup 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:56.380 rmmod nvme_tcp 00:25:56.380 rmmod 
nvme_fabrics 00:25:56.380 rmmod nvme_keyring 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@515 -- # '[' -n 1871988 ']' 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # killprocess 1871988 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 1871988 ']' 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 1871988 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 1871988 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 1871988' 00:25:56.380 killing process with pid 1871988 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 1871988 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 1871988 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # '[' '' == iso ']' 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@521 -- # [[ tcp == \t\c\p ]] 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@522 -- # nvmf_tcp_fini 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-save 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # grep -v SPDK_NVMF 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@789 -- # iptables-restore 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@654 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.380 13:36:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@1 -- # exit 1 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # trap - ERR 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # print_backtrace 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh' 'nvmf_failover' '--transport=tcp') 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1155 -- # local args 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1157 -- # xtrace_disable 00:25:58.283 
13:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:58.283 ========== Backtrace start: ========== 00:25:58.283 00:25:58.283 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_failover"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh"],["--transport=tcp"]) 00:25:58.283 ... 00:25:58.283 1120 timing_enter $test_name 00:25:58.283 1121 echo "************************************" 00:25:58.283 1122 echo "START TEST $test_name" 00:25:58.283 1123 echo "************************************" 00:25:58.283 1124 xtrace_restore 00:25:58.283 1125 time "$@" 00:25:58.283 1126 xtrace_disable 00:25:58.283 1127 echo "************************************" 00:25:58.283 1128 echo "END TEST $test_name" 00:25:58.283 1129 echo "************************************" 00:25:58.283 1130 timing_exit $test_name 00:25:58.283 ... 00:25:58.283 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh:25 -> main(["--transport=tcp"]) 00:25:58.283 ... 
00:25:58.283 20 fi 00:25:58.283 21 00:25:58.283 22 run_test "nvmf_identify" $rootdir/test/nvmf/host/identify.sh "${TEST_ARGS[@]}" 00:25:58.283 23 run_test "nvmf_perf" $rootdir/test/nvmf/host/perf.sh "${TEST_ARGS[@]}" 00:25:58.283 24 run_test "nvmf_fio_host" $rootdir/test/nvmf/host/fio.sh "${TEST_ARGS[@]}" 00:25:58.283 => 25 run_test "nvmf_failover" $rootdir/test/nvmf/host/failover.sh "${TEST_ARGS[@]}" 00:25:58.283 26 run_test "nvmf_host_discovery" $rootdir/test/nvmf/host/discovery.sh "${TEST_ARGS[@]}" 00:25:58.283 27 run_test "nvmf_host_multipath_status" $rootdir/test/nvmf/host/multipath_status.sh "${TEST_ARGS[@]}" 00:25:58.283 28 run_test "nvmf_discovery_remove_ifc" $rootdir/test/nvmf/host/discovery_remove_ifc.sh "${TEST_ARGS[@]}" 00:25:58.283 29 run_test "nvmf_identify_kernel_target" "$rootdir/test/nvmf/host/identify_kernel_nvmf.sh" "${TEST_ARGS[@]}" 00:25:58.283 30 run_test "nvmf_auth_host" "$rootdir/test/nvmf/host/auth.sh" "${TEST_ARGS[@]}" 00:25:58.283 ... 00:25:58.283 00:25:58.283 ========== Backtrace end ========== 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1194 -- # return 0 00:25:58.283 00:25:58.283 real 0m23.900s 00:25:58.283 user 1m16.234s 00:25:58.283 sys 0m4.928s 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1 -- # exit 1 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # trap - ERR 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # print_backtrace 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh' 'nvmf_host' '--transport=tcp') 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1155 -- # local args 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1157 -- # 
xtrace_disable 00:25:58.283 13:36:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.284 ========== Backtrace start: ========== 00:25:58.284 00:25:58.284 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_host"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh"],["--transport=tcp"]) 00:25:58.284 ... 00:25:58.284 1120 timing_enter $test_name 00:25:58.284 1121 echo "************************************" 00:25:58.284 1122 echo "START TEST $test_name" 00:25:58.284 1123 echo "************************************" 00:25:58.284 1124 xtrace_restore 00:25:58.284 1125 time "$@" 00:25:58.284 1126 xtrace_disable 00:25:58.284 1127 echo "************************************" 00:25:58.284 1128 echo "END TEST $test_name" 00:25:58.284 1129 echo "************************************" 00:25:58.284 1130 timing_exit $test_name 00:25:58.284 ... 00:25:58.284 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh:16 -> main(["--transport=tcp"]) 00:25:58.284 ... 00:25:58.284 11 exit 0 00:25:58.284 12 fi 00:25:58.284 13 00:25:58.284 14 run_test "nvmf_target_core" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:58.284 15 run_test "nvmf_target_extra" $rootdir/test/nvmf/nvmf_target_extra.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:58.284 => 16 run_test "nvmf_host" $rootdir/test/nvmf/nvmf_host.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:58.284 17 00:25:58.284 18 # Interrupt mode for now is supported only on the target, with the TCP transport and posix or ssl socket implementations. 
00:25:58.284 19 if [[ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" && $SPDK_TEST_URING -eq 0 ]]; then 00:25:58.284 20 run_test "nvmf_target_core_interrupt_mode" $rootdir/test/nvmf/nvmf_target_core.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:25:58.284 21 run_test "nvmf_interrupt" $rootdir/test/nvmf/target/interrupt.sh --transport=$SPDK_TEST_NVMF_TRANSPORT --interrupt-mode 00:25:58.284 ... 00:25:58.284 00:25:58.284 ========== Backtrace end ========== 00:25:58.284 13:36:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1194 -- # return 0 00:25:58.284 00:25:58.284 real 1m23.303s 00:25:58.284 user 3m23.220s 00:25:58.284 sys 0m23.200s 00:25:58.284 13:36:39 nvmf_tcp -- common/autotest_common.sh@1125 -- # trap - ERR 00:25:58.284 13:36:39 nvmf_tcp -- common/autotest_common.sh@1125 -- # print_backtrace 00:25:58.284 13:36:39 nvmf_tcp -- common/autotest_common.sh@1153 -- # [[ ehxBET =~ e ]] 00:25:58.284 13:36:39 nvmf_tcp -- common/autotest_common.sh@1155 -- # args=('--transport=tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh' 'nvmf_tcp' '/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf') 00:25:58.284 13:36:39 nvmf_tcp -- common/autotest_common.sh@1155 -- # local args 00:25:58.284 13:36:39 nvmf_tcp -- common/autotest_common.sh@1157 -- # xtrace_disable 00:25:58.284 13:36:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:58.284 ========== Backtrace start: ========== 00:25:58.284 00:25:58.284 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh:1125 -> run_test(["nvmf_tcp"],["/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh"],["--transport=tcp"]) 00:25:58.284 ... 
00:25:58.284 1120 timing_enter $test_name 00:25:58.284 1121 echo "************************************" 00:25:58.284 1122 echo "START TEST $test_name" 00:25:58.284 1123 echo "************************************" 00:25:58.284 1124 xtrace_restore 00:25:58.284 1125 time "$@" 00:25:58.284 1126 xtrace_disable 00:25:58.284 1127 echo "************************************" 00:25:58.284 1128 echo "END TEST $test_name" 00:25:58.284 1129 echo "************************************" 00:25:58.284 1130 timing_exit $test_name 00:25:58.284 ... 00:25:58.284 in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh:280 -> main(["/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf"]) 00:25:58.284 ... 00:25:58.284 275 # list of all tests can properly differentiate them. Please do not merge them into one line. 00:25:58.284 276 if [ "$SPDK_TEST_NVMF_TRANSPORT" = "rdma" ]; then 00:25:58.284 277 run_test "nvmf_rdma" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:58.284 278 run_test "spdkcli_nvmf_rdma" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:58.284 279 elif [ "$SPDK_TEST_NVMF_TRANSPORT" = "tcp" ]; then 00:25:58.284 => 280 run_test "nvmf_tcp" $rootdir/test/nvmf/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:58.284 281 if [[ $SPDK_TEST_URING -eq 0 ]]; then 00:25:58.284 282 run_test "spdkcli_nvmf_tcp" $rootdir/test/spdkcli/nvmf.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:58.284 283 run_test "nvmf_identify_passthru" $rootdir/test/nvmf/target/identify_passthru.sh --transport=$SPDK_TEST_NVMF_TRANSPORT 00:25:58.284 284 fi 00:25:58.284 285 run_test "nvmf_dif" $rootdir/test/nvmf/target/dif.sh 00:25:58.284 ... 
00:25:58.284 00:25:58.284 ========== Backtrace end ========== 00:25:58.284 13:36:39 nvmf_tcp -- common/autotest_common.sh@1194 -- # return 0 00:25:58.284 00:25:58.284 real 16m55.804s 00:25:58.284 user 41m25.988s 00:25:58.284 sys 4m14.061s 00:25:58.284 13:36:39 nvmf_tcp -- common/autotest_common.sh@1 -- # autotest_cleanup 00:25:58.284 13:36:39 nvmf_tcp -- common/autotest_common.sh@1392 -- # local autotest_es=1 00:25:58.284 13:36:39 nvmf_tcp -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:58.284 13:36:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:10.500 INFO: APP EXITING 00:26:10.500 INFO: killing all VMs 00:26:10.500 INFO: killing vhost app 00:26:10.500 INFO: EXIT DONE 00:26:11.443 Waiting for block devices as requested 00:26:11.443 0000:84:00.0 (8086 0a54): vfio-pci -> nvme 00:26:11.443 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:11.443 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:11.703 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:11.703 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:11.703 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:11.961 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:11.961 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:11.961 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:11.961 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:12.220 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:12.220 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:12.220 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:12.220 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:12.479 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:12.479 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:12.479 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:13.855 Cleaning 00:26:13.855 Removing: /var/run/dpdk/spdk0/config 00:26:13.855 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:13.855 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:13.855 Removing: 
/var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:13.855 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:13.855 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:26:13.855 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:26:13.855 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:26:13.855 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:26:13.855 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:13.855 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:13.855 Removing: /var/run/dpdk/spdk1/config 00:26:13.855 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:13.855 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:13.855 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:26:13.855 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:13.855 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:26:13.855 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:26:13.855 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:26:13.855 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:26:13.855 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:13.855 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:13.855 Removing: /var/run/dpdk/spdk1/mp_socket 00:26:13.855 Removing: /var/run/dpdk/spdk2/config 00:26:13.855 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:13.855 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:13.855 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:13.855 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:13.855 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:26:13.855 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:26:13.855 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:26:13.855 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:26:13.855 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:13.855 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:13.855 Removing: 
/var/run/dpdk/spdk3/config 00:26:13.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:13.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:13.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:13.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:13.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:26:13.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:26:13.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:26:13.855 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:26:13.855 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:13.855 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:13.855 Removing: /var/run/dpdk/spdk4/config 00:26:13.855 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:13.855 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:13.855 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:13.855 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:13.855 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:26:13.855 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:26:13.855 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:26:13.855 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:26:13.855 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:13.855 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:13.855 Removing: /dev/shm/bdev_svc_trace.1 00:26:13.855 Removing: /dev/shm/nvmf_trace.0 00:26:13.855 Removing: /dev/shm/spdk_tgt_trace.pid1683418 00:26:13.855 Removing: /var/run/dpdk/spdk0 00:26:14.114 Removing: /var/run/dpdk/spdk1 00:26:14.114 Removing: /var/run/dpdk/spdk2 00:26:14.114 Removing: /var/run/dpdk/spdk3 00:26:14.114 Removing: /var/run/dpdk/spdk4 00:26:14.114 Removing: /var/run/dpdk/spdk_pid1681288 00:26:14.114 Removing: /var/run/dpdk/spdk_pid1682509 00:26:14.114 Removing: /var/run/dpdk/spdk_pid1683418 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1683808 00:26:14.115 Removing: 
/var/run/dpdk/spdk_pid1684407 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1684557 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1685299 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1685350 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1685603 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1686863 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1687749 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1688061 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1688261 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1688463 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1688769 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1688924 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1689070 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1689257 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1689806 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1692217 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1692373 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1692533 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1692652 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1693021 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1693069 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1693439 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1693494 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1693686 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1693782 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1693940 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1694067 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1694433 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1694586 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1694894 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1696910 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1699423 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1706224 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1706620 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1709023 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1709291 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1711809 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1715982 
00:26:14.115 Removing: /var/run/dpdk/spdk_pid1718070 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1724205 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1729297 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1730448 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1731114 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1741005 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1743192 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1770456 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1773597 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1777256 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1780939 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1781002 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1781568 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1782224 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1782942 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1783815 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1783826 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1783968 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1784096 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1784102 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1784725 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1785349 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1785925 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1786294 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1786364 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1786500 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1787472 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1788176 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1793269 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1820207 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1822982 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1824109 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1825367 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1825508 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1825643 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1825782 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1826323 00:26:14.115 Removing: 
/var/run/dpdk/spdk_pid1827578 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1828401 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1828822 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1830639 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1831394 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1831929 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1834284 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1837588 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1837589 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1837590 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1839589 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1844257 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1846993 00:26:14.115 Removing: /var/run/dpdk/spdk_pid1850709 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1851659 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1852704 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1853672 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1856455 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1858706 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1862746 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1862750 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1865520 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1865652 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1865894 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1866148 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1866160 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1869143 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1869730 00:26:14.374 Removing: /var/run/dpdk/spdk_pid1872271 00:26:14.374 Clean 00:28:35.829 13:39:04 nvmf_tcp -- common/autotest_common.sh@1451 -- # return 1 00:28:35.829 13:39:04 nvmf_tcp -- common/autotest_common.sh@1 -- # : 00:28:35.829 13:39:04 nvmf_tcp -- common/autotest_common.sh@1 -- # exit 1 00:28:35.841 [Pipeline] } 00:28:35.858 [Pipeline] // stage 00:28:35.866 [Pipeline] } 00:28:35.883 [Pipeline] // timeout 00:28:35.890 [Pipeline] } 00:28:35.895 ERROR: script returned exit code 1 00:28:35.895 Setting overall build result to 
FAILURE 00:28:35.909 [Pipeline] // catchError 00:28:35.914 [Pipeline] } 00:28:35.928 [Pipeline] // wrap 00:28:35.934 [Pipeline] } 00:28:35.947 [Pipeline] // catchError 00:28:35.955 [Pipeline] stage 00:28:35.957 [Pipeline] { (Epilogue) 00:28:35.970 [Pipeline] catchError 00:28:35.972 [Pipeline] { 00:28:35.984 [Pipeline] echo 00:28:35.986 Cleanup processes 00:28:35.992 [Pipeline] sh 00:28:36.278 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:36.278 1897726 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:36.293 [Pipeline] sh 00:28:36.579 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:28:36.579 ++ grep -v 'sudo pgrep' 00:28:36.579 ++ awk '{print $1}' 00:28:36.579 + sudo kill -9 00:28:36.579 + true 00:28:36.592 [Pipeline] sh 00:28:36.876 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:28:42.167 [Pipeline] sh 00:28:42.453 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:28:42.453 Artifacts sizes are good 00:28:42.466 [Pipeline] archiveArtifacts 00:28:42.472 Archiving artifacts 00:28:42.706 [Pipeline] sh 00:28:43.004 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:28:43.019 [Pipeline] cleanWs 00:28:43.029 [WS-CLEANUP] Deleting project workspace... 00:28:43.029 [WS-CLEANUP] Deferred wipeout is used... 00:28:43.036 [WS-CLEANUP] done 00:28:43.038 [Pipeline] } 00:28:43.055 [Pipeline] // catchError 00:28:43.066 [Pipeline] echo 00:28:43.068 Tests finished with errors. Please check the logs for more info. 00:28:43.072 [Pipeline] echo 00:28:43.074 Execution node will be rebooted. 00:28:43.090 [Pipeline] build 00:28:43.093 Scheduling project: reset-job 00:28:43.107 [Pipeline] sh 00:28:43.427 + logger -p user.info -t JENKINS-CI 00:28:43.435 [Pipeline] } 00:28:43.451 [Pipeline] // stage 00:28:43.458 [Pipeline] } 00:28:43.473 [Pipeline] // node 00:28:43.478 [Pipeline] End of Pipeline 00:28:43.511 Finished: FAILURE